
feat: Allow accepting and burning cycles in replicated queries #363

Merged
merged 25 commits into master from dimitris/accept-cycles-replicated-query on Jan 22, 2025

Conversation

dsarlis
Member

@dsarlis dsarlis commented Jul 16, 2024

This PR allows canisters to accept and burn cycles when executing queries in replicated mode (e.g. as an ingress message or when another canister calls the query method). See also spec PR.

This allows canisters to require some payment when someone calls an expensive endpoint, similar to what is already possible for update calls. Given that replicated queries run across all nodes, there is no technical issue with persisting cycles changes, and it gives developers another way of protecting expensive endpoints of their canisters.
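For illustration from the canister developer's side, below is a minimal sketch of an endpoint that charges for a replicated query. The CDK function names and module paths (`ic_cdk::api::call::msg_cycles_accept128`, `msg_cycles_available128`) and the fee constant are assumptions that depend on the CDK version, not something defined by this PR; cycles can only be attached and accepted when the query executes in replicated mode.

```rust
use ic_cdk::api::call::{msg_cycles_accept128, msg_cycles_available128};

// Hypothetical fee, in cycles, that callers must attach to this endpoint.
const FEE: u128 = 1_000_000_000;

#[ic_cdk::query]
fn expensive_endpoint() -> String {
    // Cycles are only available when the query runs in replicated mode
    // (ingress message or inter-canister call); accepting them relies on the
    // behavior introduced by this PR.
    if msg_cycles_available128() < FEE {
        ic_cdk::trap("please attach enough cycles to cover the fee");
    }
    // Accept only the fee; anything above it is refunded to the caller.
    let accepted = msg_cycles_accept128(FEE);
    format!("accepted {} cycles, doing the expensive work", accepted)
}
```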

The main parts of the change are the following:

Previously, the sandbox would return an optional StateModifications object, since changes need to be persisted for update calls but not for queries. Because we now want to persist cycles changes, which are part of the canister's system state, the struct is modified to hold an Option<ExecutionStateModifications> (capturing the optionality of applying execution state changes) while always including SystemStateChanges. The System API is also adjusted to return only the changes that are relevant for each execution context.
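As a rough sketch of the resulting shape (the type names come from the description above; the fields are illustrative placeholders, not the actual replica definitions):

```rust
// Illustrative only; field contents are placeholders, not the real replica code.
struct ExecutionStateModifications {
    // Wasm heap / stable memory deltas, exported globals, etc.
}

struct SystemStateChanges {
    // Cycles accepted or burned, canister logs, and other system state updates.
}

struct StateModifications {
    // Present when execution state changes must be persisted (update calls);
    // None when they are discarded (replicated queries).
    execution_state_modifications: Option<ExecutionStateModifications>,
    // Always present, so cycles changes can be persisted for replicated
    // queries as well.
    system_state_changes: SystemStateChanges,
}
```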

The benefit of this is that it makes it clearer which parts of the canister state can be persisted. It also allows other parts of the system state that need to be persisted for replicated queries, such as canister logs, to be handled more uniformly: instead of handling them separately, they can simply be included when applying changes to the system state (not part of this PR, but a possible follow-up). Future changes with similar characteristics (e.g. persisting canister metrics in a way similar to logs) could then be incorporated more easily.

The second big chunk of changes is in the replicated_query execution handler, which is modified to handle accepting cycles and refunding any remaining amount to the caller.
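Conceptually, the accept-and-refund bookkeeping boils down to something like the following simplified sketch; the types and function here are made up for illustration and do not mirror the actual execution-environment code.

```rust
/// Amount of cycles; a stand-in for the replica's cycles type.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Cycles(u128);

struct QueryCyclesOutcome {
    accepted: Cycles,
    refund: Cycles,
}

/// `available` is what the caller attached to the call; `requested` is what
/// the canister tried to accept via the System API during execution.
fn settle_cycles(available: Cycles, requested: Cycles) -> QueryCyclesOutcome {
    // A canister can never accept more than the caller attached.
    let accepted = Cycles(requested.0.min(available.0));
    // Whatever was not accepted is refunded to the caller with the response.
    let refund = Cycles(available.0 - accepted.0);
    QueryCyclesOutcome { accepted, refund }
}

fn main() {
    let outcome = settle_cycles(Cycles(1_000), Cycles(400));
    assert_eq!(outcome.accepted, Cycles(400));
    assert_eq!(outcome.refund, Cycles(600));
}
```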

Some tests were also added to confirm things work as expected.

@github-actions github-actions bot added the feat label Jul 16, 2024
@dsarlis dsarlis marked this pull request as ready for review January 21, 2025 09:11
@dsarlis dsarlis requested review from a team as code owners January 21, 2025 09:11
Contributor

@mraszyk mraszyk left a comment


Spec test changes look good to me. I have left some comments on execution changes, too, to improve my understanding.

@michael-weigelt
Contributor

The changes make sense to me.
I wonder whether there are any assumptions about call contexts that have solidified in the codebase over time and are now no longer true. It's hard for me to judge, but have you given it any thought? I know it's hard to prove a negative; I'm just bringing it up to tick a box.

@dsarlis
Member Author

dsarlis commented Jan 21, 2025

The changes make sense to me. I wonder whether there are any assumptions about call contexts that have solidified in the codebase over time and are now no longer true. It's hard for me to judge, but have you given it any thought? I know it's hard to prove a negative; I'm just bringing it up to tick a box.

I don't think so. Creating a call context in this case is technically what should happen, since we're receiving a new call to the canister. The shortcut we previously took of not creating one was mainly because we knew there can be no downstream calls in this case, so the call context would be closed immediately within the same execution. However, the same situation can also occur for update calls (when they make no downstream calls), so it was more of an optimization that was happening before.

Contributor

@berestovskyy berestovskyy left a comment


Looks good, thanks!

@dsarlis dsarlis added this pull request to the merge queue Jan 22, 2025
Merged via the queue into master with commit 178acea Jan 22, 2025
27 checks passed
@dsarlis dsarlis deleted the dimitris/accept-cycles-replicated-query branch January 22, 2025 11:08
github-merge-queue bot pushed a commit that referenced this pull request Jan 31, 2025
…om the System API (#3706)

Currently, there are two similar functions in the `SystemApiImpl` for extracting system state modifications. One of them, `take_system_state_modifications`, is used in the sandbox when preparing the state changes to be transmitted back to the replica. Then, after deserializing the result it receives from the sandbox, the replica calls `into_system_state_changes` on the reconstructed `system_api` to extract the changes again.

In a recent [PR](#363), `into_system_state_modifications` was changed (to make it clearer which changes are relevant per message type) but `take_system_state_modifications` wasn't. This still works correctly because `into_system_state_modifications` is the last function called before the state changes are applied back to the canister state. However, it is also a very clear example of how the code can diverge without anyone noticing, potentially with more severe implications in the future.

This PR proposes using a single function, which provides the usual benefits of applying the changes in a consistent way and "having one way" of doing the same task. We keep `take_system_state_modifications` (this allows us to get rid of some `clone`s but, more importantly, it is needed in the sandbox; see the comment in the existing code) and change the call sites accordingly.
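As a minimal sketch of the consolidation idea (all names and fields below are hypothetical stand-ins, not the replica's actual definitions), keeping a single `take_...`-style extraction method that moves the accumulated changes out also avoids the extra `clone`s:

```rust
// Hypothetical stand-ins for illustration only.
#[derive(Default, Debug, PartialEq)]
struct SystemStateModifications {
    cycles_accepted: u128,
    // ... other accumulated changes (logs, balance updates, etc.)
}

struct SystemApiImpl {
    system_state_modifications: SystemStateModifications,
}

impl SystemApiImpl {
    /// Single extraction point: both the sandbox and the replica-side call
    /// sites go through this method, so the two paths cannot diverge.
    fn take_system_state_modifications(&mut self) -> SystemStateModifications {
        // `mem::take` moves the changes out and leaves a default value behind,
        // so no clone of the accumulated change set is needed.
        std::mem::take(&mut self.system_state_modifications)
    }
}

fn main() {
    let mut api = SystemApiImpl {
        system_state_modifications: SystemStateModifications { cycles_accepted: 42 },
    };
    assert_eq!(api.take_system_state_modifications().cycles_accepted, 42);
    // A second call yields the (empty) default.
    assert_eq!(api.take_system_state_modifications(), SystemStateModifications::default());
}
```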