Optimize simple time ranged search queries #5759
Conversation
Force-pushed from 804be29 to 61c3b74.
Thanks for your PR. The way the logic is split here between `optimize_split_order` and `optimize` makes it very hard to proofread. It is already a bit confusing because the `optimize()` logic for each variant depends on the sort order, which is different for each variant. It would be good to avoid tightening the coupling between the two `match` statements even further by making the sort orders more complex.
quickwit/quickwit-search/src/leaf.rs (outdated)
```rust
let min_required_splits = splits
    .iter()
    // splits are sorted by whether they are contained in the request time range
    .filter(|split| Self::is_contained(split, &request))
    .map(|split| split.num_docs)
    // computing the partial sum
    .scan(0u64, |partial_sum: &mut u64, num_docs_in_split: u64| {
```
Can you explain why this works? Say you have 5 splits:
- 1 is contained in the request
- 4 are only overlapping the request

In that case `min_required_splits` would be 1 here, but the biggest `end_timestamp` or the smallest `start_timestamp` might be in any of the other splits.
My thinking was that the `.take_while(|partial_sum| *partial_sum < num_requested_docs)` prevents `min_required_splits` from ending up as 1, but now I see that I should validate that we actually reached this condition, and only set `min_required_splits` and optimize if we reached it; otherwise return early.
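As an illustration of that idea only (the helper and the names `contained_doc_counts` and `num_requested_docs` are made up here, and this is not the actual patch), the guard can be expressed by returning nothing when the running sum never reaches the requested number of documents:

```rust
// Illustrative sketch, not the actual patch: compute how many of the splits that are
// fully contained in the request's time range are needed to satisfy the request,
// or return None when even all of them together are not enough.
fn min_required_splits(contained_doc_counts: &[u64], num_requested_docs: u64) -> Option<usize> {
    contained_doc_counts
        .iter()
        // running total of docs available in the first i + 1 contained splits
        .scan(0u64, |partial_sum, num_docs| {
            *partial_sum += num_docs;
            Some(*partial_sum)
        })
        // index of the first prefix that holds enough docs for the request
        .position(|partial_sum| partial_sum >= num_requested_docs)
        // None means the threshold was never reached: skip the optimization
        .map(|idx| idx + 1)
}
```

With doc counts `[5, 5]` and 15 requested docs, `position` finds nothing and the optimization is skipped, which is the situation the review comment above points out.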
I pushed a quick fix; I still need to test it, but I think it addresses the problem.
I didn't know about `std::ops::ControlFlow` 😄. Nevertheless, this reaches the point where iterators stop being practical. A `for` loop is much more readable here.
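As a sketch of what the `for` loop version of the same computation could look like (hypothetical names as before, not the code from this PR):

```rust
// Illustrative sketch, not the code from this PR: the same computation as an
// explicit loop, with an early return once enough docs have been accumulated.
fn min_required_splits(contained_doc_counts: &[u64], num_requested_docs: u64) -> Option<usize> {
    let mut partial_sum = 0u64;
    for (idx, num_docs_in_split) in contained_doc_counts.iter().enumerate() {
        partial_sum += *num_docs_in_split;
        if partial_sum >= num_requested_docs {
            // the first idx + 1 contained splits can satisfy the request on their own
            return Some(idx + 1);
        }
    }
    // even all contained splits together do not hold enough docs: skip the optimization
    None
}
```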
Force-pushed from 61c3b74 to 851c15c.
It is a bit confusing; I'll try to think of a better way to do this when I have some more time.
Force-pushed from 851c15c to 103b8d9.
Force-pushed from 103b8d9 to 69e85ff.
I would try refactoring this into:

where

This would regroup the optimization logic with the sort orders it depends on to be correct. (To help the review, ideally re-implement the current logic in one commit, and add your logic extension in a separate commit.) Thanks!
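The snippet the reviewer attached is not reproduced above; purely to illustrate what regrouping each sort order with its matching optimization could look like, here is a sketch with made-up types and variant names (not the reviewer's actual proposal):

```rust
// Made-up types and names, for illustration only: each match arm owns both the sort
// it performs and the optimization that is only correct under that order, so the two
// cannot silently drift apart.
struct Split {
    timestamp_start: i64,
    timestamp_end: i64,
}

enum SplitOrder {
    StartTimestampAsc,
    EndTimestampDesc,
    Unsorted,
}

fn sort_and_optimize(splits: &mut [Split], order: SplitOrder) {
    match order {
        SplitOrder::StartTimestampAsc => {
            splits.sort_by_key(|split| split.timestamp_start);
            // ...apply only the optimizations that are valid for ascending start timestamps...
        }
        SplitOrder::EndTimestampDesc => {
            splits.sort_by_key(|split| std::cmp::Reverse(split.timestamp_end));
            // ...apply only the optimizations that are valid for descending end timestamps...
        }
        SplitOrder::Unsorted => {
            // ...apply only order-independent optimizations...
        }
    }
}
```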
Previously, when the search request contained a time range, we aborted the optimization of converting unneeded split searches into count queries. Removed the TODO that was in the code for this. Split the PR into 2: #5758.
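To make the description concrete, here is a rough sketch of the count-query conversion it refers to (hypothetical request type and field, not the actual quickwit code): once the first `min_required_splits` splits are known to provide enough documents, the remaining split searches only need to contribute to the total hit count, so they can be downgraded to requests that fetch zero hits.

```rust
// Hypothetical sketch, not the actual quickwit code: downgrade the split searches
// beyond the first `min_required_splits` ones to count-only requests.
struct LeafSplitRequest {
    max_hits: u64,
}

fn downgrade_to_count_queries(split_requests: &mut [LeafSplitRequest], min_required_splits: usize) {
    for request in split_requests.iter_mut().skip(min_required_splits) {
        // in this sketch, a request for zero hits still reports the number of
        // matching docs, so the total count stays correct while no documents
        // are fetched or scored
        request.max_hits = 0;
    }
}
```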