feat: Allow accepting and burning cycles in replicated queries #363
Conversation
rs/canister_sandbox/src/replica_controller/sandboxed_execution_controller.rs
Spec test changes look good to me. I have also left some comments on the execution changes, to improve my understanding.
The changes make sense to me.
I don't think so. Creating a call context here is technically what should happen, since we are receiving a new call to the canister. The shortcut we previously took of not creating one was possible mainly because we knew there could be no downstream calls in this case, so the call context would be closed immediately within the same execution. However, the same situation can also occur for update calls (when they make no downstream calls), so skipping the call context was really an optimization rather than required behavior.
Looks good, thanks!
…om the System API (#3706)

Currently, there are two similar functions in the `SystemApiImpl` for extracting system state modifications. One of them, `take_system_state_modifications`, is used in the sandbox when preparing the state changes to be transmitted back to the replica. The replica, after deserializing the result it receives from the sandbox, then calls `into_system_state_changes` on the reconstructed `system_api` to extract the changes again.

In a recent [PR](#363), `into_system_state_modifications` was changed (to make it clearer which changes are relevant per message type) but `take_system_state_modifications` wasn't. This still works correctly because `into_system_state_modifications` is the last function called before the state changes are applied back to the canister state. However, it is also a very clear example of how the code can easily diverge unnoticed, potentially with more severe implications in the future.

This PR proposes using a single function, which provides the usual benefits of a consistent way of applying the changes and of "having one way" of doing the same task. We keep `take_system_state_modifications` (this allows us to get rid of some `clone`s, but more importantly it is needed in the sandbox; see the comment in the existing code) and change the call sites accordingly.
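The single-extraction-path idea from the commit message above can be sketched as follows. The type and method names mirror the ones mentioned (`SystemApiImpl`, `take_system_state_modifications`), but the bodies and fields are illustrative, not the actual ic code:

```rust
use std::mem;

// Illustrative stand-in for the real system-state-modifications payload.
#[derive(Clone, Debug, Default, PartialEq)]
struct SystemStateModifications {
    cycles_accepted: u128,
}

struct SystemApiImpl {
    modifications: SystemStateModifications,
}

impl SystemApiImpl {
    // Single extraction path: both the sandbox (serializing changes for the
    // replica) and the replica (applying changes to canister state) call this
    // one function, so the two sides cannot silently diverge. Taking (rather
    // than cloning) leaves a default value behind and avoids extra `clone`s.
    fn take_system_state_modifications(&mut self) -> SystemStateModifications {
        mem::take(&mut self.modifications)
    }
}
```

The `mem::take` pattern is one reason a "take" style function is convenient here: the modifications can be moved out exactly once, and a second call observes an empty default.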
This PR allows canisters to accept and burn cycles when executing queries in replicated mode (e.g. as an ingress message or when another canister calls the query method). See also spec PR.
This allows canisters to expect some payment when someone calls an expensive endpoint, similar to what is already possible in update calls. Given that replicated queries run across all nodes, there is no technical obstacle to persisting cycles changes, and it gives developers another way of protecting the expensive endpoints of their canisters.
The main parts of the change are the following:
Previously, the sandbox would return an optional `StateModifications` object, since changes needed to be persisted for update calls but not for queries. Because we now want to persist cycles changes, which are part of the canister's system state, the struct is modified to hold an `Option<ExecutionStateModifications>` to capture the optionality of applying execution state changes, while it always includes `SystemStateChanges`. The System API is also adjusted to return only the changes that are relevant for the context of execution.

The benefit of this is that it makes clearer which parts of the canister state can be persisted. It also allows handling other parts of the system state that need to be persisted for replicated queries, like canister logs, more uniformly. Instead of handling them separately, they could now simply be included when applying changes to the system state (not included in this PR, but a possible follow-up). Further future changes with similar characteristics (e.g. persisting canister metrics in a way similar to logs) could be incorporated more easily.
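The reshaped return type described above can be sketched roughly as follows. The field and inner type names are illustrative stand-ins, not the actual definitions in the repository:

```rust
// Stand-ins for the real change sets; contents are illustrative only.
#[derive(Debug, Default, PartialEq)]
struct SystemStateChanges {
    cycles_accepted: u128,
}

#[derive(Debug, Default, PartialEq)]
struct ExecutionStateModifications {
    dirty_pages: usize,
}

// Before: the sandbox returned Option<StateModifications> (all-or-nothing).
// After: system-state changes are always present; only the execution-state
// part is optional, so a replicated query can persist cycle movements
// without persisting any heap/execution-state changes.
#[derive(Debug, Default)]
struct StateModifications {
    execution: Option<ExecutionStateModifications>,
    system: SystemStateChanges,
}

fn modifications_for_replicated_query(accepted: u128) -> StateModifications {
    StateModifications {
        // Queries never persist execution-state changes.
        execution: None,
        system: SystemStateChanges { cycles_accepted: accepted },
    }
}
```

Making the system-state part unconditional is what lets follow-ups like canister logs ride along the same path instead of being special-cased.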
The second big chunk of changes is in the `replicated_query` execution handler. The handler is modified to handle the acceptance of cycles and to refund any remaining amount to the caller.

Some tests were also added to confirm things work as expected.
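The accept-then-refund flow the handler performs can be sketched as below. This is a simplified model, not the actual execution-environment code; the struct and method names are hypothetical, with `msg_cycles_accept` loosely mirroring the semantics of the `ic0.msg_cycles_accept128` System API call:

```rust
// Minimal model of the cycles attached to a single call.
struct CallContext {
    cycles_available: u128,
    cycles_accepted: u128,
}

impl CallContext {
    // Accept up to `max_amount` of the cycles attached to the call and
    // return how much was actually accepted (never more than is available).
    fn msg_cycles_accept(&mut self, max_amount: u128) -> u128 {
        let accepted = max_amount.min(self.cycles_available);
        self.cycles_available -= accepted;
        self.cycles_accepted += accepted;
        accepted
    }

    // After execution finishes, whatever was not accepted is refunded
    // to the caller.
    fn refund_to_caller(&self) -> u128 {
        self.cycles_available
    }
}
```

For example, a query endpoint called with 100 cycles attached that accepts at most 60 would keep 60 and refund the remaining 40 to the caller.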