forked from paritytech/polkadot-sdk
Skalman proof of possession for all cryptos #9
Draft
coax1d wants to merge 112 commits into master from skalman--proof-of-possession-for-all-cryptos
Conversation
- Derive ProofOfPossession for all public-key crypto types besides BLS.
- Change the PoP type to be &[u8] instead of a signature so it also works for BLS12.
Enforcing context
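A minimal sketch of the byte-oriented proof-of-possession interface the bullets above describe. The trait and method names below are illustrative only (they are not the actual sp-core API); the point is that the proof is produced and verified as opaque bytes rather than as the scheme's `Signature` type, so BLS12 and the other public-key schemes can share one shape.

```rust
/// Illustrative only: hypothetical traits showing a proof of possession that is
/// exchanged as raw bytes (`Vec<u8>` / `&[u8]`) instead of a scheme-specific
/// `Signature`, so BLS12 and the other public-key schemes fit the same interface.
pub trait ProofOfPossessionGenerator {
    /// Prove that the holder controls the secret key behind its public key.
    fn generate_proof_of_possession(&mut self) -> Vec<u8>;
}

pub trait ProofOfPossessionVerifier {
    type Public;
    /// Verify a proof received as an opaque byte slice.
    fn verify_proof_of_possession(pop: &[u8], public: &Self::Public) -> bool;
}
```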
**Summary:** This PR enables authoring of multiple blocks in one AURA slot in the slot-based collator and stabilizes the slot-based collator.

## CLI Changes

The flag `--experimental-use-slot-based` is now marked as deprecated. I opted to introduce `--authoring slot-based` instead of just removing the `experimental` prefix. By introducing the `authoring` variant, we get some future-proofing in case we want to introduce further options.

## Change Description

With elastic-scaling, we are able to author multiple blocks with a single relay-chain parent. In the initial iteration, the interval between two blocks was determined by the `slot_duration` of the parachain. This PR introduces a more flexible model, where we try to author multiple blocks in a single slot if the runtime allows it.

The block authoring loop is largely the same. The [`SlotTimer`](https://github.com/paritytech/polkadot-sdk/blob/f1935bd96752866d52795608206e6a436929107e/cumulus/client/consensus/aura/src/collators/slot_based/slot_timer.rs#L48-L48) now lives in a separate module and is updated with the last seen [core count](https://github.com/paritytech/polkadot-sdk/blob/f1935bd96752866d52795608206e6a436929107e/cumulus/client/consensus/aura/src/collators/slot_based/block_builder_task.rs#L231-L231). It will then trigger rounds in the block-building loop based on the core count. This allows some flexibility where elastic-scaling chains can run on a single core in quiet times. Previously, running on 1 core with a 3-core elastic-scaling chain would result in authors getting skipped because the `slot_duration` was too low.

## Parameter Considerations

The core logic does not change, so there are a few things to consider:
- The `ConsensusHook` implementation still determines how many blocks are allowed per relay-chain block. So if you add arbitrary cores to an async-backing, 6-second parachain, `can_build_upon` in the runtime will deny block-building of additional blocks.
- The `MINIMUM_PERIOD` in the runtime needs to be configured to allow enough blocks in the slot. A "classic" configuration of `SLOT_DURATION/2` will lead to slot mismatches when running with 3 cores.
- We fetch available cores at least once every relay chain block. So if a parachain runs with a 12-second slot duration and 1 fixed core, we would still author 2 blocks if the parachain runtime allows it.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Michal Kucharczyk <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
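A toy model of the scheduling idea described above, under the simplifying assumption (mine, not the PR's) that the authoring interval within a slot is just the slot duration divided by the last seen core count. The `SlotTimer` below is a hypothetical stand-in, not the implementation linked above.

```rust
use std::time::Duration;

/// Hypothetical, simplified stand-in for the slot timer described above: the
/// interval between block-building rounds shrinks as more cores are assigned,
/// so a chain running on a single core in quiet times still authors at the slot rate.
struct SlotTimer {
    slot_duration: Duration,
    last_seen_core_count: u32,
}

impl SlotTimer {
    /// Record the core count observed for the latest relay-chain block.
    fn update_core_count(&mut self, cores: u32) {
        self.last_seen_core_count = cores.max(1);
    }

    /// Interval between authoring rounds within one slot.
    fn authoring_interval(&self) -> Duration {
        self.slot_duration / self.last_seen_core_count
    }
}

fn main() {
    let mut timer = SlotTimer { slot_duration: Duration::from_secs(6), last_seen_core_count: 1 };
    assert_eq!(timer.authoring_interval(), Duration::from_secs(6));
    // Three cores assigned: three authoring rounds fit into the same slot.
    timer.update_core_count(3);
    assert_eq!(timer.authoring_interval(), Duration::from_secs(2));
}
```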
This PR includes:
- deduplicating some XCM decoding logic
- making use of `decode_with_depth_limit` consistently for `VersionedXcm`
- some cleanup
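For context, a small sketch of depth-limited SCALE decoding via `parity-scale-codec`'s `DecodeLimit` trait, which is where `decode_with_depth_limit` comes from. The nested type and the depth constant below are made up for illustration; the PR applies the same pattern to `VersionedXcm` with XCM's own depth limit.

```rust
// Assumes the `parity-scale-codec` crate (imported as `codec`) with its `derive` feature.
use codec::{Decode, DecodeLimit, Encode};

/// Illustrative recursive type; `VersionedXcm` is similarly nestable.
#[derive(Encode, Decode)]
enum Nested {
    Leaf(u32),
    Node(Box<Nested>),
}

/// Illustrative limit, not XCM's actual constant.
const MAX_DECODE_DEPTH: u32 = 8;

fn decode_bounded(mut bytes: &[u8]) -> Result<Nested, codec::Error> {
    // Rejects payloads nested deeper than the limit instead of recursing
    // without bound, which is the point of `decode_with_depth_limit`.
    Nested::decode_with_depth_limit(MAX_DECODE_DEPTH, &mut bytes)
}

fn main() {
    let encoded = Nested::Node(Box::new(Nested::Leaf(7))).encode();
    assert!(decode_bounded(&encoded).is_ok());
}
```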
[pallet-revive] Add support for EIP-1898 block notation: https://eips.ethereum.org/EIPS/eip-1898

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
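EIP-1898 lets a caller identify a block either by number/tag or by an object carrying `blockHash` and an optional `requireCanonical` flag. The sketch below shows one way such a parameter can be modeled with serde; the type name and field handling are illustrative, not pallet-revive's actual types.

```rust
// Assumes the `serde` (with `derive`) and `serde_json` crates.
use serde::Deserialize;

/// Hypothetical model of an EIP-1898 block parameter.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum BlockNumberOrTagOrHash {
    /// "latest", "earliest", "pending", or a 0x-prefixed hex number.
    NumberOrTag(String),
    /// EIP-1898 object form.
    Hash {
        #[serde(rename = "blockHash")]
        block_hash: String,
        #[serde(rename = "requireCanonical", default)]
        require_canonical: bool,
    },
}

fn main() {
    let by_tag: BlockNumberOrTagOrHash = serde_json::from_str(r#""latest""#).unwrap();
    let by_hash: BlockNumberOrTagOrHash =
        serde_json::from_str(r#"{"blockHash": "0x12ab", "requireCanonical": true}"#).unwrap();
    println!("{by_tag:?} / {by_hash:?}");
}
```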
This PR contains a few fixes for the "Promote RC to final" flow:
- The `polkadot-prepare-worker` and `polkadot-execute-worker` artefacts are now uploaded alongside the `polkadot` artefact (they were missing before)
- Added the missing upload of the deb package
- Fixed a few typos

Closes: paritytech/release-engineering#241
This PR makes the litep2p backend the default network backend in Kusama.

We performed a gradual rollout in Kusama by asking validators to manually switch to litep2p. The rollout went smoothly, with 250 validators running litep2p without issues. This PR represents the next step in testing the backend at scale.

Thanks to everyone who contributed to making this happen! A special shoutout to the validators for their prompt support and cooperation.

While at it, the litep2p release is bumped to the latest 0.9.2, which downgrades a spamming log to debug.

### CLI Testing Done

```
### Kusama without network backend specified
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output
2025-03-10 14:24:18.503 INFO main sub-libp2p: Running litep2p network backend

### Kusama with libp2p
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output --network-backend libp2p
INFO main sub-libp2p: Running libp2p network backend

### Kusama with litep2p
RUST_LOG=info ./target/release/polkadot --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output --network-backend litep2p
INFO main sub-libp2p: Running litep2p network backend

### Polkadot without network backend specified
RUST_LOG=info ./target/release/polkadot --chain polkadot --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output
2025-03-10 14:27:03.762 INFO main sub-libp2p: Running libp2p network backend
```

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Ankan <[email protected]>
## Summary

The existing fungible migration code has an issue when handling partially unbonding accounts, leaving them in an inconsistent state. These changes fix it by properly withdrawing overstake from unlock chunks.

This PR also removes the `withdraw_overstake` extrinsic from pallet-staking, as this scenario could only occur before the fungible migration. With fungibles, over-staking is no longer possible.

## TODO

- [ ] Backport to stable2503.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
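A minimal sketch of the "withdraw overstake from unlock chunks" idea, using a hypothetical, simplified ledger; the real migration works on pallet-staking's ledger and balance holds, not on these types.

```rust
/// Hypothetical, simplified staking ledger: an active stake plus unlocking chunks.
#[derive(Debug, PartialEq)]
struct Ledger {
    active: u128,
    unlocking: Vec<u128>, // chunk amounts, oldest first
}

/// Trim the unlocking chunks (newest first) until the ledger total no longer
/// exceeds the balance that can actually be held for staking.
fn withdraw_overstake(ledger: &mut Ledger, holdable: u128) {
    let total = ledger.active + ledger.unlocking.iter().sum::<u128>();
    let mut overstake = total.saturating_sub(holdable);
    while overstake > 0 && !ledger.unlocking.is_empty() {
        let last = ledger.unlocking.len() - 1;
        if ledger.unlocking[last] > overstake {
            ledger.unlocking[last] -= overstake;
            overstake = 0;
        } else {
            overstake -= ledger.unlocking[last];
            ledger.unlocking.pop();
        }
    }
}

fn main() {
    let mut ledger = Ledger { active: 50, unlocking: vec![30, 20] };
    // 100 units recorded, only 90 holdable: 10 units of overstake are trimmed.
    withdraw_overstake(&mut ledger, 90);
    assert_eq!(ledger, Ledger { active: 50, unlocking: vec![30, 10] });
}
```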
…tech#7889)

The xcm-executor will not support `ExecuteWithOrigin` from the start. It might be implemented again in the future when more time can be spent on it.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR adds a convenience extrinsic `manual_slash` for governance to slash a validator manually.

## Changes

* The `on_offence` implementation for the Staking pallet accepts a slice of `OffenceDetails` including the full validator exposure, however, it simply [ignores](https://github.com/paritytech/polkadot-sdk/blob/c8d33396345237c1864dfc0a9b2172b7dfe7ac8f/substrate/frame/staking/src/pallet/impls.rs#L1864) that part. I've extracted the functionality into an inherent `on_offence` method that takes `OffenceDetails` without the full exposure, and this is called directly in `manual_slash`.
* `manual_slash` creates an offence for a validator with a given slash percentage.

## Questions

- [x] Should `manual_slash` accept a session instead of an era for when the validator was in the active set? Staking thinks in terms of eras and we can check out-of-bounds values this way, which is why it was chosen for this implementation, but if there are arguments against, happy to change to a session index.
- [x] Should the accepted origin be something more than just root? Changed to `T::AdminOrigin` to align with `cancel_deferred_slash`.
- [x] Should I adapt this PR also against paritytech#6996? Looking at the changes, it should apply mostly without conflicts.

---------

Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Ankan <[email protected]>
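To illustrate the shape of the call described above: an admin-gated entry point that validates the era and then hands a single offence, with the supplied slash fraction, to the internal exposure-less offence path. All names, types, and checks below are hypothetical simplifications, not the pallet's actual code.

```rust
/// Parts-per-billion slash fraction; a stand-in for `sp_arithmetic::Perbill`.
#[derive(Clone, Copy, Debug)]
struct Perbill(u32);

/// The data the internal, exposure-less offence path would receive.
#[derive(Debug)]
struct ManualOffence<AccountId> {
    offender: AccountId,
    era: u32,
    slash_fraction: Perbill,
}

/// Hypothetical sketch of the `manual_slash` flow.
fn manual_slash<AccountId>(
    origin_is_admin: bool,
    active_era: u32,
    bonding_duration: u32,
    validator: AccountId,
    era: u32,
    slash_fraction: Perbill,
) -> Result<ManualOffence<AccountId>, &'static str> {
    // Only the configured admin origin (not just any signed origin) may slash.
    if !origin_is_admin {
        return Err("BadOrigin");
    }
    // The era must be within the window staking still knows how to slash.
    if era > active_era || active_era - era > bonding_duration {
        return Err("EraOutOfBounds");
    }
    Ok(ManualOffence { offender: validator, era, slash_fraction })
}

fn main() {
    let offence = manual_slash(true, 100, 28, "validator-1", 99, Perbill(100_000_000));
    println!("{offence:?}");
}
```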
Giving the wrong origin in `extrinsic_call` would result in:

```
   |
43 | #[benchmarks]
   | ^^^^^^^^^^^^^
   | |
   | expected associated type, found `Result<RawOrigin<...>, ...>`
   | arguments to this function are incorrect
   |
   = note: expected associated type `<T as frame_system::Config>::RuntimeOrigin`
                         found enum `Result<RawOrigin<<T as frame_system::Config>::AccountId>, <T as frame_system::Config>::RuntimeOrigin>`
note: method defined here
  --> $WORKSPACE/substrate/frame/support/src/traits/dispatch.rs
   |
   | fn dispatch_bypass_filter(self, origin: Self::RuntimeOrigin) -> DispatchResultWithPostInfo;
   |    ^^^^^^^^^^^^^^^^^^^^^^
   = note: this error originates in the attribute macro `benchmarks` (in Nightly builds, run with -Z macro-backtrace for more info)
```

Now it results in an error message with a good span.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Support the "latest" block tag in the eth_getLogs `from_block` and `to_block` parameters.

This is not in the spec (https://github.com/ethereum/execution-apis/blob/main/src/schemas/filter.yaml#L17) but it is defined and used by third parties and in some other reference docs. See https://docs.metamask.io/services/reference/ethereum/json-rpc-methods/eth_getlogs/

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
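A hypothetical helper showing the intended behaviour: a `fromBlock`/`toBlock` value may be a hex-encoded number or the tag "latest", which resolves to the node's best block. The function name and error handling are illustrative only, not pallet-revive's actual code.

```rust
/// Illustrative only: resolve an eth_getLogs block parameter to a block number.
fn resolve_block_param(param: Option<&str>, best_block: u64) -> Result<u64, &'static str> {
    match param {
        // An absent value and the "latest" tag both resolve to the best block.
        None | Some("latest") => Ok(best_block),
        Some("earliest") => Ok(0),
        Some(hex) => {
            let digits = hex.strip_prefix("0x").ok_or("expected a 0x-prefixed number or a tag")?;
            u64::from_str_radix(digits, 16).map_err(|_| "invalid block number")
        }
    }
}

fn main() {
    assert_eq!(resolve_block_param(Some("latest"), 1234), Ok(1234));
    assert_eq!(resolve_block_param(Some("0x10"), 1234), Ok(16));
    assert!(resolve_block_param(Some("not-a-number"), 1234).is_err());
}
```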
# Description

Part of paritytech#3326

As per the title, the `pallet::getter` usage has been removed from:
- `pallet-bridge-beefy`
- `pallet-bridge-grandpa`
- `pallet-bridge-messages`
- `pallet-bridge-relayers`
- `pallet-xcm-bridge-hub-router`

polkadot address: 12poSUQPtcF1HUPQGY3zZu2P8emuW9YnsPduA4XG3oCEfJVp

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…bound substreams (paritytech#7781)

This PR punishes behaviors that deviate from the notification spec. When a peer misbehaves by writing data on a unidirectional read stream, the peer is banned and disconnected immediately.

In this PR:
- The `NotificationOutError` is enriched with the termination reason and made publicly available to higher levels
- The protocol misbehavior is propagated through the `CloseDesired` events
- The network behavior of the protocol is responsible for banning the peer
- The peer is banned immediately and, as a result, the reputation system disconnects the malicious / misbehaving peer
- Logs are enriched with protocol names

Closes: paritytech#7722

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
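A tiny illustrative model of the policy listed above (the enum and function are hypothetical, not the sc-network or litep2p types): only a close caused by a protocol violation, such as the remote writing on a stream it should only read from, leads to an immediate ban; ordinary closes are not punished.

```rust
/// Hypothetical reasons a notification substream might be closed.
#[derive(Debug)]
enum CloseReason {
    /// The remote wrote data on a substream that is read-only for it.
    ProtocolViolation,
    /// Ordinary termination (peer disconnected, stream closed cleanly, ...).
    Closed,
}

/// Illustrative policy: ban (and thereby disconnect) only on protocol violations.
fn should_ban_peer(reason: &CloseReason) -> bool {
    matches!(reason, CloseReason::ProtocolViolation)
}

fn main() {
    assert!(should_ban_peer(&CloseReason::ProtocolViolation));
    assert!(!should_ban_peer(&CloseReason::Closed));
}
```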
Update pallet-revive-fixtures so that it can build without looking up dependencies from the workspace --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

It is difficult to spot changes to umbrella features during review when they are defined on a single long line, so TOML formatting is now applied to long array lines by splitting them across multiple lines. This applies to any TOML file in the monorepo that is not excluded from taplo.

## Integration

N/A

## Review Notes

Set the global taplo config `array_auto_expand` to true.

---------

Signed-off-by: Iulian Barbu <[email protected]>
…7871)

Set the timeout to 60 minutes to prevent failures like this one: https://github.com/paritytech/polkadot-sdk/actions/runs/13651327605/job/38160444244?pr=7790

Thanks!

---------

Co-authored-by: Bastian Köcher <[email protected]>
Asset Hub was using the native token for benchmarking XCM instructions. This is not ideal, since the native token is cheaper to handle than, for example, an asset from `pallet-assets`. Had to remove some restrictive checks from `pallet-xcm-benchmarks`. I'll bring back the checks with a better framework in the future that allows for handling multiple assets (`fungibles::*` traits).

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Add missing pre-compiles 02 -> 09 [weights changes](https://weights.tasty.limo/compare?repo=polkadot-sdk&threshold=10&path_pattern=substrate%2Fframe%2F**%2Fsrc%2Fweights.rs%2Cpolkadot%2Fruntime%2F*%2Fsrc%2Fweights%2F**%2F*.rs%2Cpolkadot%2Fbridges%2Fmodules%2F*%2Fsrc%2Fweights.rs%2Ccumulus%2F**%2Fweights%2F*.rs%2Ccumulus%2F**%2Fweights%2Fxcm%2F*.rs%2Ccumulus%2F**%2Fsrc%2Fweights.rs&method=asymptotic&ignore_errors=true&unit=time&old=master&new=pg%2Fprecompiles02_09&pallet=revive) --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Shouldn't matter much, but this runs on every produced block, so it's free performance.
✄ -----------------------------------------------------------------------------

Thank you for your Pull Request! Please make sure it follows the contribution guidelines outlined in this document and fill out the sections below. Once you're ready to submit your PR for review, please delete this section and leave only the text under the "Description" heading.

Description

A concise description of what your PR is doing, and what potential issue it is solving. Use GitHub semantic linking to link the PR to an issue that must be closed once this is merged.

Integration

In-depth notes about how this PR should be integrated by downstream projects. This part is mandatory, and should be reviewed by reviewers, if the PR does NOT have the R0-Silent label. In case of an R0-Silent PR, it can be ignored.

Review Notes

In-depth notes about the implementation details of your PR. This should be the main guide for reviewers to understand your approach and effectively review it. If too long, use <details>.

Imagine that someone who is depending on the old code wants to integrate your new code and the only information that they get is this section. It helps to include example usage and default values here, with a diff code-block to show a possible integration.

Include your leftover TODOs, if any, here.

Checklist

- My PR follows the labeling requirements of this project (at minimum one label for `T` required)

You can remove the "Checklist" section once all items have been checked. Thank you for your contribution!

✄ -----------------------------------------------------------------------------