Antalya: Cache the list objects operation on object storage using a TTL + prefix matching cache implementation #743
base: antalya
Conversation
18592c1 to 989cfe0
{
    if (const auto it = cache.find(key); it != cache.end())
    {
        if (IsStaleFunction()(it->first))
This case is interesting: we find an exact match, but it has expired. Should we try to find a prefix match, or simply update the entry?
Well, there can be a more up-to-date prefix entry, so why not try to reuse it?
The only reason is that this entry would cease to exist. It would never be cached again, and it would become a linear search forever.
Actually, not forever: if the more up-to-date prefix entry gets evicted and this query is performed again, it would re-appear.
But I think you are right.
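To make the agreed behaviour concrete, here is a minimal, self-contained sketch (the types and helper names are illustrative and not taken from this PR): an expired exact match is evicted and treated as a miss, so the caller falls back to prefix matching, where a fresher listing of a parent prefix may still cover the request.

```cpp
#include <chrono>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical sketch, not the PR's code.
using Clock = std::chrono::steady_clock;

struct CachedListing
{
    std::vector<std::string> objects;   // placeholder for the ListObjects result
    Clock::time_point inserted_at;
};

std::optional<CachedListing> lookupExact(
    std::map<std::string, CachedListing> & cache,
    const std::string & key,
    Clock::duration ttl)
{
    auto it = cache.find(key);
    if (it == cache.end())
        return std::nullopt;

    if (Clock::now() - it->second.inserted_at > ttl)
    {
        // Stale exact match: remove it so it does not shadow fresher data,
        // then let the caller try a prefix match instead of refreshing here.
        cache.erase(it);
        return std::nullopt;
    }
    return it->second;
}
```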
{
    throw Exception(
        ErrorCodes::BAD_ARGUMENTS,
        "Using glob iterator with path without globs is not allowed (used path: {})",
Shouldn't it be LOGICAL_ERROR?
This looks like a code branch that cannot be reached under normal operation (the user does not select which iterator to use manually).
I agree, it should probably be LOGICAL_ERROR. But:
This is mostly a copy and paste from the existing GlobIterator.
I might refactor this to avoid duplication. For now, this is just a draft implementation.
Even if I refactor this, I would opt for keeping parity with existing code & upstream. This will make the review and merges with upstream easier.
src/Core/Settings.cpp
Outdated
@@ -6108,6 +6108,9 @@ Limit for hosts used for request in object storage cluster table functions - azu
Possible values:
- Positive integer.
- 0 — All hosts in cluster.
)", EXPERIMENTAL) \
DECLARE(Bool, use_object_storage_list_objects_cache, true, R"(
Please add it to src/Core/SettingsChangesHistory.cpp as well.
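For reference, the entry would look roughly like the sketch below. The version string is a placeholder, and the exact list layout depends on the branch; in recent ClickHouse versions the history is a list of `{setting, old_value, new_value, reason}` tuples per release:

```cpp
// Sketch only: added to the appropriate version block in
// src/Core/SettingsChangesHistory.cpp ("25.x" is a placeholder version).
{"use_object_storage_list_objects_cache", false, true,
 "New setting to cache object storage ListObjects results."},
```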
    cache.setMaxCount(count);
}

void ObjectStorageListObjectsCache::setTTL(std::size_t ttl_)
Is it in seconds/milliseconds/minutes/hours?
In seconds; I will modify the argument name to make that clear.
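A possible shape of that rename (the member name and its type are assumptions, mirroring the fragment above rather than the final code):

```cpp
// Sketch: make the unit explicit in the parameter name and convert once,
// so call sites cannot confuse seconds with milliseconds.
void ObjectStorageListObjectsCache::setTTL(std::size_t ttl_seconds)
{
    ttl = std::chrono::seconds(ttl_seconds);   // `ttl` member assumed to be std::chrono::seconds
}
```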
@@ -435,6 +436,16 @@ BlockIO InterpreterSystemQuery::execute()
            break;
#else
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "The server was compiled without the support for Parquet");
#endif
        }
        case Type::DROP_OBJECT_STORAGE_LIST_OBJECTS_CACHE:
Does caching work only on Parquet files, or generally on any S3 ListObjects request?
Ah, copy-and-paste issue. It should be any :D
Done
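For illustration, the corrected branch could look something like the sketch below; the cache accessor and the access-right check are assumptions, not the PR's actual code:

```cpp
case Type::DROP_OBJECT_STORAGE_LIST_OBJECTS_CACHE:
{
    // Clears cached ListObjects results for any object storage, not only Parquet-related paths.
    getContext()->checkAccess(AccessType::SYSTEM_DROP_CACHE);   // hypothetical access type
    ObjectStorageListObjectsCache::instance().clear();          // hypothetical accessor
    break;
}
```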
throw Exception(
    ErrorCodes::BAD_ARGUMENTS,
    "Using glob iterator with path without globs is not allowed (used path: {})",
This is a minor nitpick, but maybe throw early, i.e. effectively do something like this:
if (!configuration->isPathWithGlobs())
{
    throw Exception(...);
}
// rest of the function as it was, but not inside an indented block
Yeah, agreed. Like I said, this is both 1) a copy & paste and 2) a WIP.
I'll look into it with more attention once the core features and testing have been implemented.
It is kind of problematic to test this using stateless or integration tests. A single glob query can perform multiple ListObjects calls, which affect the ProfileEvents counters. Not only that, but some of these list calls do not iterate through the entire list, hence they do not insert into the cache. Relying on hard-coded numbers based on current behavior is kind of bad. The best I can do with a stateless test would be some sort of … Or test the cache alone using unit tests.
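If the unit-test route is taken, a test over the cache in isolation could look roughly like this; the class API (`set`/`get` and the setters) is assumed from the fragments quoted in this PR, not taken from the actual header:

```cpp
#include <gtest/gtest.h>

// Hypothetical unit-test sketch: exercises the prefix-matching behaviour
// without any real object storage, so no ProfileEvents counters are involved.
TEST(ObjectStorageListObjectsCacheTest, ParentPrefixServesLongerKey)
{
    ObjectStorageListObjectsCache cache;   // API below is assumed, not the PR's actual interface
    cache.setTTL(60);                      // seconds, per the discussion above
    cache.setMaxCount(100);

    cache.set("bucket/data/", {"bucket/data/a.parquet", "bucket/data/b.parquet"});

    // A miss on the exact key should be served by the cached parent prefix.
    const auto result = cache.get("bucket/data/a.parquet");
    ASSERT_TRUE(result.has_value());
    EXPECT_EQ(result->size(), 2u);
}
```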
As long as we are storing the cache in the system anyway, maybe we could make it available as some kind of system table, e.g. …
I didn't understand what you mean by this.
Yeah, we need to design one that also covers #586; I'm just not sure I'll include it in this PR.
I meant that "we have a cache, why not make an interface to view it".
Reverted my last two commits due to the performance degradation #743 (comment)
0b57378 to d7b50f4
std::vector<Key> to_remove;

for (auto it = cache.begin(); it != cache.end(); ++it)
I don't like this cycle. In my opinion, something like this would be better (in pseudocode):
while (!key.prefix.empty())
{
    if (auto res = cache.getWithKey(key))
    { // should not be more than one passed key, if key '/foo/bar' exists, key '/foo' can't be in cache
        if (IsStaleFunction(res))
        {
            BasePolicy::remove(res);
            return std::nullopt;
        }
        else
            return res;
    }
    key.prefix.pop_back();
}
Can you explain why it is better?
And why do you assume the following?
{ // should not be more than one passed key, if key '/foo/bar' exists, key '/foo' can't be in cache
Well, assuming this version you suggested works, the time complexity goes down to O(key_path_size), which should probably be better than O(N).
But it won't find "the best match", though.
Just implemented it, can you please have a look?
Btw, thanks for the suggestion, it's a great one.
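As a standalone illustration of the idea (not the PR's actual implementation), the lookup can probe progressively shorter prefixes of the requested key instead of scanning every cache entry:

```cpp
#include <map>
#include <optional>
#include <string>

// Sketch: probe the map with ever-shorter prefixes of the requested key,
// so the cost is O(key_size) lookups rather than O(cache_length) scans.
template <typename Value>
std::optional<Value> findByLongestPrefix(const std::map<std::string, Value> & cache, std::string key)
{
    while (!key.empty())
    {
        if (const auto it = cache.find(key); it != cache.end())
            return it->second;
        key.pop_back();   // drop one trailing character, as in the pseudocode above
    }
    return std::nullopt;
}
```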
Wrote some comments.
…f O(key_size) rather than O(cache_length)
LGTM
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Cache for ListObjects calls
Documentation entry for user-facing changes