feat: improve lookup table overhead for cloning pipeline #457
base: dev
Conversation
This commit introduces a method on the manager that ensures a set of pubkeys is present in lookup tables without incrementing their reference count if they already exist. If a pubkey is not in any table, a new table is created for it and its refcount is initialized to 1. To support this and provide better introspection, the following was added:
- a method to query the reference count of a pubkey within a specific table
- a method to query the reference count of a pubkey across all active tables

Integration tests are included to verify the behavior in different scenarios, such as when pubkeys already exist or when they are new.
- also named some complex types
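For illustration, here is a minimal, non-authoritative sketch of the API shape this description implies. The identifiers (TableManager, ensure_pubkeys, get_global_refcount) are assumptions since the actual names are not shown above, and locking/async concerns are left out.

```rust
use std::{
    collections::HashMap,
    sync::atomic::{AtomicUsize, Ordering},
};

use solana_sdk::pubkey::Pubkey;

/// Hypothetical per-table bookkeeping: pubkey -> reference count.
#[derive(Default)]
struct LookupTable {
    pubkeys: HashMap<Pubkey, AtomicUsize>,
}

impl LookupTable {
    /// Reference count of a pubkey within this table.
    fn get_refcount(&self, pubkey: &Pubkey) -> Option<usize> {
        self.pubkeys
            .get(pubkey)
            .map(|count| count.load(Ordering::Relaxed))
    }
}

/// Hypothetical manager owning the active tables.
#[derive(Default)]
struct TableManager {
    active_tables: Vec<LookupTable>,
}

impl TableManager {
    /// Ensure each pubkey is present in some table; pubkeys that already
    /// exist keep their refcount, new pubkeys are inserted with refcount 1.
    fn ensure_pubkeys(&mut self, pubkeys: &[Pubkey]) {
        for pubkey in pubkeys {
            let exists = self
                .active_tables
                .iter()
                .any(|table| table.pubkeys.contains_key(pubkey));
            if !exists {
                // Simplification: one fresh table per missing pubkey; a real
                // implementation would likely batch pubkeys into shared tables.
                let mut table = LookupTable::default();
                table.pubkeys.insert(*pubkey, AtomicUsize::new(1));
                self.active_tables.push(table);
            }
        }
    }

    /// Reference count of a pubkey across all active tables (summed here as
    /// one possible interpretation).
    fn get_global_refcount(&self, pubkey: &Pubkey) -> Option<usize> {
        let total: usize = self
            .active_tables
            .iter()
            .filter_map(|table| table.get_refcount(pubkey))
            .sum();
        (total > 0).then_some(total)
    }
}
```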
…nz/committor-improve-table-speed

* thlorenz/committor-increase-compute-budget:
  chore: adding safety multiplier to table mania CU budgets
- this added more complexity and we can treat it just like any table mania error instead, keeping the code much simpler
- left in some refactorings done while adding the special handling, since they improved the code quality
LGTM. Minor comments
```
@@ -275,22 +276,34 @@ impl CommittorProcessor {
            .await;
        commit_stages.extend(failed_finalize.into_iter().flat_map(
            |(sig, infos)| {

    fn get_sigs_for_bundle(
```
This adds more lines of code than before + extra indirection
```
@@ -378,19 +410,12 @@ impl CommittorProcessor {
                    CommitStage::FailedUndelegate((
                        x.clone(),
                        CommitStrategy::args(use_lookup),
                        CommitSignatures {
```
This version is much more explicit and direct imo.
```rust
    fn get_refcount(&self, pubkey: &Pubkey) -> Option<usize> {
        self.pubkeys
            .get(pubkey)
            .map(|count| count.load(Ordering::SeqCst))
```
```diff
-            .map(|count| count.load(Ordering::SeqCst))
+            .map(|count| count.load(Ordering::Relaxed))
```
Such strong ordering isn't required here.
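For context, a standalone counter read like this does not synchronize access to any other memory, so a relaxed load is sufficient. A small self-contained illustration (the static counter here is hypothetical, not part of the PR):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// A plain reference counter used only for introspection: no other data is
// published or consumed based on this value, so Relaxed loads suffice.
static REFCOUNT: AtomicUsize = AtomicUsize::new(0);

fn bump() -> usize {
    // Relaxed is also enough for the increment itself; atomicity is still
    // guaranteed, only cross-thread ordering of *other* memory is not.
    REFCOUNT.fetch_add(1, Ordering::Relaxed) + 1
}

fn current() -> usize {
    REFCOUNT.load(Ordering::Relaxed)
}
```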
```rust
        // 1. Check which pubkeys already exist in any table
        for pubkey in pubkeys {
            let mut found = false;
            for table in self.active_tables.read().await.iter() {
```
self.active_tables.read().await
is acquired for every pubkey. It could be locked once outside the loop.
Note: the read guard needs to be dropped before self.reserve_new_pubkeys.
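A sketch of the suggested change, assuming active_tables is a tokio::sync::RwLock over HashSet-backed tables; apart from the names taken from the snippet and the comment (active_tables, reserve_new_pubkeys), everything here is hypothetical. The read guard is taken once for the whole scan and dropped before any write-locking call.

```rust
use std::collections::HashSet;

use solana_sdk::pubkey::Pubkey;
use tokio::sync::RwLock;

struct Manager {
    active_tables: RwLock<Vec<HashSet<Pubkey>>>,
}

impl Manager {
    /// Collect the pubkeys that are not yet in any table, holding the read
    /// lock only once for the whole scan.
    async fn missing_pubkeys(&self, pubkeys: &[Pubkey]) -> Vec<Pubkey> {
        let tables = self.active_tables.read().await;
        let missing = pubkeys
            .iter()
            .filter(|pubkey| !tables.iter().any(|table| table.contains(*pubkey)))
            .copied()
            .collect();
        // `tables` (the read guard) is dropped here, before the caller goes
        // on to e.g. reserve_new_pubkeys, which may need a write lock.
        missing
    }
}
```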
Summary
This PR fixes the performance regression introduced in the previous PR by not waiting for the lookup table transaction to succeed.
Instead, it just logs the result and, right before committing an account, ensures that the necessary pubkeys are in place.
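A rough sketch of that pattern (the function names here are illustrative, not the actual committor API): the lookup-table transaction is spawned in the background and only its outcome is logged, so the cloning path no longer blocks on confirmation.

```rust
// Hypothetical stand-in for sending the lookup-table extension transaction.
async fn send_lookup_table_tx() -> Result<String, String> {
    Ok("<signature>".to_string())
}

// Fire-and-forget: start the transaction and return immediately so the
// cloning pipeline is not blocked on its confirmation.
fn reserve_pubkeys_in_background() {
    tokio::spawn(async {
        match send_lookup_table_tx().await {
            Ok(sig) => println!("lookup table tx confirmed: {sig}"),
            Err(err) => eprintln!("lookup table tx failed: {err}"),
        }
    });
}
```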
Details
Table Mania Enhancements
Committor Service Optimizations
General Improvements
Integration Testing Improvements
Performance
Performance is back to what it was on master, i.e. the first clone no longer takes much longer
than subsequent clones. For comparison, here are the performance results; for more details see this PR.
master
new committor
We can see that there is a huge deviation in this branch due to the first clone taking a lot
longer as it reserves the pubkeys needed to commit the cloned account in a lookup table.
new committor on this branch
We can see that the deviation is back to a sane amount.
In this case the max is still higher than on master, but that could be an outlier.
I ran the perf test another time and confirmed this: