feat(argus): Implement Keeper state and main request processing loop #2542
Conversation
Pull Request Overview
This PR implements the KeeperState for managing open Pulse requests and introduces the core loop for processing these requests.
- Introduces the RequestFulfillmentTask structure and the RequestFulfiller trait with a default implementation (a rough sketch follows this list).
- Updates module structure and dependency management, including new dependencies in Cargo.toml.
- Minor adjustments to import order and added TODO comments for future enhancements.
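For orientation, here is a rough, purely illustrative sketch of what those two pieces could look like, assuming the new async-trait and mockall dependencies are used for the trait and its test mock; the real fields and signatures live in apps/argus/src/keeper/fulfillment_task.rs and may differ:

```rust
// Hypothetical sketch only: the real definitions are in fulfillment_task.rs.
use async_trait::async_trait;

// Tracks the state of one in-flight attempt to fulfill a request.
pub struct RequestFulfillmentTask {
    pub retries: u32,
    pub succeeded: bool,
}

// Anything that can execute the callback for an open request.
// mockall generates a MockRequestFulfiller for unit tests.
#[cfg_attr(test, mockall::automock)]
#[async_trait]
pub trait RequestFulfiller: Send + Sync {
    async fn fulfill(
        &self,
        sequence_number: u64,
    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>>;
}

// Placeholder default implementation; real callback execution would go here.
pub struct DefaultFulfiller;

#[async_trait]
impl RequestFulfiller for DefaultFulfiller {
    async fn fulfill(
        &self,
        _sequence_number: u64,
    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        Ok(())
    }
}
```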
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated no comments.
File | Description
---|---
apps/argus/src/keeper/keeper_metrics.rs | Added a TODO comment to reassess the metrics for Argus.
apps/argus/src/keeper/fulfillment_task.rs | New file implementing the request fulfillment task and trait.
apps/argus/src/keeper/fee.rs | Adjusted import ordering of utility functions.
apps/argus/src/keeper.rs | Updated module declarations to include the new fulfillment module.
apps/argus/Cargo.toml | Added new dependencies: async-trait and mockall.
Comments suppressed due to low confidence (1)
apps/argus/src/keeper/fulfillment_task.rs:19
- [nitpick] The comment contains a trailing comma after 'land' that might be unclear. Consider revising it for clarity.
// (tx failed to land,)
I like it a lot that you are separating the different layers, and it's great. How do you foresee the overall structure of the code? Is it shared memory (keeper state) with different tasks interacting with it? If so, I'd consider adopting the Hermes approach. For instance, HermesUrl is not related to the main BD logic. Or would you like to have tasks communicate via message passing (like the actor model)?
pub escalation_policy: EscalationPolicy,

// The Hermes endpoint to fetch price data from
pub hermes_url: Url,
maybe this can become a HermesClient at some point?
Yep, definitely, had that thought as well. One doesn't exist for Rust (I punted on that when working on Native...); will make one when we implement TaskFulfiller.
@ali-bahjati Great! Glad you like the approach :) As far as communication between the layers goes...
This PR currently assumes that KeeperState would be shared state protected by a RwLock. The request producer obtains a write lock when it wants to update the set of requests, and the consumer mostly obtains non-blocking read locks. I think it's relatively simple this way, and we would gate external access to the state to the update() method to avoid layer violations. We should definitely monitor for lock contention though. If we went for an actor model, we would probably create a channel structure like:
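(Something along these lines; a hypothetical sketch only, assuming tokio mpsc channels, with illustrative message and type names that are not part of this PR:)

```rust
// Hypothetical sketch of the actor-model alternative (not what this PR does).
// The message enums and type names are illustrative only.
use tokio::sync::mpsc;

pub struct OpenRequest; // placeholder for the real Pulse request type

pub enum KeeperMsg {
    // chain poller / event listener -> keeper: replace the set of open requests
    UpdateRequests(Vec<OpenRequest>),
}

pub enum FulfillerMsg {
    // keeper -> fulfiller: execute the callback for one request
    Fulfill(OpenRequest),
}

pub fn wire_up() -> (mpsc::Sender<KeeperMsg>, mpsc::Sender<FulfillerMsg>) {
    // Each task owns its own state and communicates only over channels,
    // so no shared RwLock is needed.
    let (keeper_tx, _keeper_rx) = mpsc::channel::<KeeperMsg>(64);
    let (fulfiller_tx, _fulfiller_rx) = mpsc::channel::<FulfillerMsg>(64);
    (keeper_tx, fulfiller_tx)
}
```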
The actor model can be easier to reason about imo, but potentially harder to debug, since everything is connected by channels (or similar message passing) rather than function calls. It would be useful though if/when we add other ways to update the state, like via events. Not super sold either way - for now I think I'll separate concerns better between the Keeper and the Fulfiller (like the hermes_url you mentioned), keep it as shared ownership over the state with locks, and check for perf issues there once it's wired up. It should be relatively easy to replace shared state with channels since the interfaces between the layers would be unchanged.
Summary
Implement KeeperState which holds the state of open Pulse requests, and the core loop for operating on these open requests, i.e. invoking callback execution.
Rationale
The motivation here was to decouple the blockchain state implementation details from the core logic of the Keeper.
The Keeper should only be concerned with storing the current set of open requests (KeeperState.pending_requests) and operating on them with some kind of predictable policy (process_pending_requests()). This function spawns fulfillment tasks (and retries) for requests that are ready to be fulfilled. (See the docstring for a more in-depth flow.) This approach is easier to unit test, versus the Fortuna approach that uses an MPSC channel to pass block ranges from a blockchain watcher to a consumer that optimistically tries to execute with backoffs.
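For illustration, a simplified sketch of that shape; the field names, locking granularity, and readiness check here are simplified and may not match the PR exactly:

```rust
// Simplified sketch of the structure described above; not the exact PR code.
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

// Placeholder for the real on-chain request type.
pub struct OpenRequest {
    pub sequence_number: u64,
}

// Holds the current set of open Pulse requests.
#[derive(Default)]
pub struct KeeperState {
    pub pending_requests: HashMap<u64, OpenRequest>,
}

impl KeeperState {
    // The single entry point the request producer uses to refresh the set.
    pub fn update(&mut self, requests: Vec<OpenRequest>) {
        self.pending_requests = requests
            .into_iter()
            .map(|r| (r.sequence_number, r))
            .collect();
    }
}

// Core loop body: snapshot the shared state under a read lock and spawn a
// fulfillment task per request. Readiness checks and retry/escalation policy
// are elided here.
pub async fn process_pending_requests(state: Arc<RwLock<KeeperState>>) {
    let snapshot: Vec<u64> = state
        .read()
        .await
        .pending_requests
        .keys()
        .copied()
        .collect();
    for sequence_number in snapshot {
        tokio::spawn(async move {
            // A RequestFulfiller would execute the callback for this request.
            let _ = sequence_number;
        });
    }
}
```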
A different process (not implemented yet) will be responsible for keeping the set of requests updated. Since we are keeping requests on-chain, this will be a simple task that polls Pulse's view function getFirstActiveRequests every second or so and calls KeeperState.update(). We can optimize latency by also subscribing to events and proactively updating the state alongside the polling.
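A minimal sketch of what that polling task could look like, reusing the KeeperState and OpenRequest placeholders from the sketch above; PulseView and its method are hypothetical stand-ins for the real contract binding:

```rust
// Sketch of the not-yet-implemented poller; PulseView is a hypothetical
// stand-in for the contract binding that exposes getFirstActiveRequests.
use std::sync::Arc;
use std::time::Duration;
use async_trait::async_trait;
use tokio::sync::RwLock;

#[async_trait]
pub trait PulseView: Send + Sync {
    async fn get_first_active_requests(&self) -> Vec<OpenRequest>;
}

// Poll the chain every second and feed the result to the keeper through its
// single update() entry point.
pub async fn poll_open_requests(
    contract: Arc<dyn PulseView>,
    state: Arc<RwLock<KeeperState>>,
) {
    loop {
        let requests = contract.get_first_active_requests().await;
        state.write().await.update(requests);
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```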
How has this been tested?
Next up: