drskalman
Collaborator
✄ -----------------------------------------------------------------------------

Thank you for your Pull Request! 🙏 Please make sure it follows the contribution guidelines outlined in this document and fill out the sections below. Once you're ready to submit your PR for review, please delete this section and leave only the text under the "Description" heading.

Description

A concise description of what your PR is doing, and what potential issue it is solving. Use GitHub semantic linking to link the PR to an issue that must be closed once this is merged.

Integration

In-depth notes about how this PR should be integrated by downstream projects. This part is mandatory and should be reviewed by reviewers if the PR does NOT have the R0-no-crate-publish-required label; if it does have that label, this section can be ignored.

Review Notes

In-depth notes about the implementation details of your PR. This should be the main guide for reviewers to understand your approach and effectively review it. If too long, use <details>.

Imagine that someone who depends on the old code wants to integrate your new code and the only information they get is this section. It helps to include example usage and default values here, with a diff code block to show possible integration.

Include your leftover TODOs, if any, here.

Checklist

  • My PR includes a detailed description as outlined in the "Description" and its two subsections above.
  • My PR follows the labeling requirements of this project (at minimum one label for T required)
    • External contributors: ask maintainers to put the right label on your PR.
  • I have made corresponding changes to the documentation (if applicable)
  • I have added tests that prove my fix is effective or that my feature works (if applicable)

You can remove the "Checklist" section once all have been checked. Thank you for your contribution!

✄ -----------------------------------------------------------------------------

dmitry-markin and others added 30 commits July 25, 2025 09:05
Switch to the system DNS resolver instead of the 8.8.8.8 that litep2p
uses by default. This gives administrators full control over which
upstream DNS servers to use, including resolution of local names using
custom DNS servers.

Fixes paritytech#9298.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ternal (paritytech#9281)

This PR ensures that external addresses discovered by the identify
protocol are not propagated to the litep2p backend if they are not
global. This leads to a healthier DHT over time, since nodes will not
advertise loopback / non-global addresses.

We have seen various cases where loopback addresses were reported as
external:

```
2025-07-16 16:18:39.765 TRACE tokio-runtime-worker sub-libp2p::discovery: verify new external address: /ip4/127.0.0.1/tcp/30310/p2p/12D3KooWNw19ScMjzNGLnYYLQxWcM9EK9VYPbCq241araUGgbdLM    

2025-07-16 16:18:39.765  INFO tokio-runtime-worker sub-libp2p: 🔍 Discovered new external address for our node: /ip4/127.0.0.1/tcp/30310/p2p/12D3KooWNw19ScMjzNGLnYYLQxWcM9EK9VYPbCq241araUGgbdLM
```

This PR takes into account the network config for
`allow_non_global_addresses`.
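The filtering rule described above can be sketched as follows. This is a hypothetical, simplified illustration (the function name `should_propagate` and the exact notion of "non-global" are assumptions, not the PR's actual code):

```rust
use std::net::Ipv4Addr;

// Hypothetical sketch: only propagate an identify-discovered address to
// the discovery backend when it is global, unless the node is
// explicitly configured with `allow_non_global_addresses`.
fn should_propagate(addr: Ipv4Addr, allow_non_global: bool) -> bool {
    // A rough stand-in for "non-global": loopback, RFC 1918 private
    // ranges, and link-local addresses.
    let non_global = addr.is_loopback() || addr.is_private() || addr.is_link_local();
    allow_non_global || !non_global
}

fn main() {
    // The loopback address from the log above is filtered out...
    assert!(!should_propagate(Ipv4Addr::new(127, 0, 0, 1), false));
    // ...unless non-global addresses are explicitly allowed.
    assert!(should_propagate(Ipv4Addr::new(127, 0, 0, 1), true));
    // A public address always passes.
    assert!(should_propagate(Ipv4Addr::new(1, 2, 3, 4), false));
    println!("ok");
}
```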

Closes: paritytech#9261

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…2506 release branch back to master (paritytech#9320)

This PR backports:
- NODE_VERSION bumps
- spec_version bumps
- prdoc reordering
from the release branch back to master

---------

Co-authored-by: ParityReleases <[email protected]>
…f hardcoded one in benchmarks (paritytech#9325)

This PR is a simple fix for issue paritytech#9324, by making the benchmarks of
`pallet-im-online` linear up to `pallet_im_online::Config::MaxKeys`
instead of the hardcoded constant `MAX_KEYS = 1000`.

This should allow any runtime that uses `pallet-im-online` with fewer
than 1000 max keys to benchmark the pallet correctly.
…tech#9335)

This pull request adds some storage values to the whitelisted storage
item list, because they are written in every block. It also stops double
killing `InherentsApplied`: it is killed in `on_finalize`, so there is
no need to do it again in `on_initialize`.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…9189)

This PR improves handling of the following scenario:
```
send tx1: transfer to fund new  X account 
# wait for tx1 in block event (let's assume it happens at block N) 
send tx2: spend from X account
```

Before this PR, `tx2` could be invalidated (and most likely was) when
`block N-k` was finalized, because transactions are checked for validity
on the finalized block. (The `X account` does not yet exist for any
block before `block N`.)

After this commit, transactions will be revalidated on finalized blocks
only if the finalized height is greater than the height of the block at
which the transaction was originally submitted.

Note: there are no guarantees that `tx2` will actually be included; it
may still be dropped under some circumstances. This change only reduces
the likelihood of dropping the transaction.


Note for reviewers:
The fix is to simply initialize
[`validated_at`](https://github.com/paritytech/polkadot-sdk/blob/f8a1fe64c29b1ddcb5824bbb3bf327f528f18d40/substrate/client/transaction-pool/src/fork_aware_txpool/tx_mem_pool.rs#L98-L99)
field of `TxInMemPool` which is used to
[select](https://github.com/paritytech/polkadot-sdk/blob/f8a1fe64c29b1ddcb5824bbb3bf327f528f18d40/substrate/client/transaction-pool/src/fork_aware_txpool/tx_mem_pool.rs#L583-L586)
transactions for mempool revalidation on finalized block.
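The selection logic described above can be sketched as follows. This is a hypothetical, heavily simplified model (the real `TxInMemPool` uses different types and synchronization), shown only to illustrate why initializing `validated_at` to the submission height prevents premature revalidation:

```rust
// Hypothetical sketch: `validated_at` starts at the submission height,
// so finalizing blocks below that height no longer triggers a
// revalidation that would find the tx invalid (e.g. tx2 spending from
// an account funded only at block N).
struct TxInMemPool {
    validated_at: u64, // height at which the tx was last validated (or submitted)
}

impl TxInMemPool {
    fn new_at(submission_height: u64) -> Self {
        Self { validated_at: submission_height }
    }

    // Select for mempool revalidation only when the finalized height is
    // above the last validation/submission height.
    fn needs_revalidation(&self, finalized_height: u64) -> bool {
        finalized_height > self.validated_at
    }
}

fn main() {
    let tx2 = TxInMemPool::new_at(10); // submitted at block N = 10
    assert!(!tx2.needs_revalidation(8)); // finalizing block N-k: skipped
    assert!(tx2.needs_revalidation(11)); // finalizing past N: revalidate
    println!("ok");
}
```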

Fixes: paritytech#9150

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Iulian Barbu <[email protected]>
…9308)

This PR replaces `log` with `tracing` instrumentation on
`pallet-bridge-messages` by providing structured logging.

Partially addresses paritytech#9211
This PR introduces creation of a view for the known best block during
instantiation of `fatxpool`. This is intended to fix instant-seal
nodes, where block building is
[triggered](https://github.com/paritytech/polkadot-sdk/blob/73b44193c8e66acd699f04265027289d030f6c66/substrate/client/consensus/manual-seal/src/lib.rs#L238)
via transaction import. Without views, no event is generated on this
stream when a transaction is submitted.

##### Notes for reviewers
A view is injected once after an empty fatxpool is instantiated by any
of the `new_*` functions. A small refactor was done to re-use the code.
Tests were adjusted to match the new behavior.

This seems to be the easiest fix we can do.


Todo:
- [x] run integrations tests,
- [x] confirm the fix with minimal-node,

Fixes: paritytech#9323

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <[email protected]>
)

This PR replaces `log` with `tracing` instrumentation on
`bridge-runtime-common` by providing structured logging.

Partially addresses paritytech#9211

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Andrii <[email protected]>
## Context

The offence handling pipeline has four main stages:

1. **Reporting on RC**: Offences are reported on the Relay Chain (RC)
and exported to Asset Hub (AH) via RC::AHClient.
2. **Queueing**: AH staking pallet receives the offence in `fn
on_new_offence`, performs sanity checks, and enqueues it in
`OffenceQueue` and `OffenceQueueEras`.
3. **Processing**: Offences are processed one by one, starting from the
oldest era in the queue. Processed items are stored in
`UnappliedSlashes`.
4. **Application**: Finally, slashes are applied one page per block
after the slash defer duration from the offence era.

---

## Problem

While unlikely, a spam of offence reports could slow down processing
enough that some offences remain unhandled even after their bonding
period ends.

This creates a rare corner case: a withdrawal could happen for an era
that still has pending offences, which breaks slashing guarantees.

Also, slash application happens gradually (one page per block). If some
slashes are left unapplied at the end of their application era (due to
chain stalls or similar), they must be manually applied using the
permissionless `apply_slash` call.

Both scenarios are rare, but they expose risks to the integrity of
slashing.

---

## What this PR Changes

### 1. Block withdrawals for eras with unprocessed offences
Withdrawals are now restricted to the **minimum of:**

- The active era, and
- The last fully processed offence era.

This ensures withdrawals don't happen for eras that still have pending
offences.
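The restriction boils down to a minimum over two era counters. A minimal sketch (function and parameter names are illustrative, not the pallet's actual API):

```rust
// Hypothetical sketch: withdrawals are allowed only up to the minimum
// of the active era and the last fully processed offence era, so funds
// cannot leave for an era that may still have pending offences.
fn max_withdraw_era(active_era: u32, last_processed_offence_era: u32) -> u32 {
    active_era.min(last_processed_offence_era)
}

fn main() {
    // Offence processing lags behind: withdrawals are capped at era 7.
    assert_eq!(max_withdraw_era(10, 7), 7);
    // Fully caught up: the active era bounds withdrawals as usual.
    assert_eq!(max_withdraw_era(10, 10), 10);
    println!("ok");
}
```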

**Why not block withdrawals per account instead?**  
That would require scanning each page of `ErasStakersPaged` for the
validator the staker is exposed to — which is costly. Since this is an
edge case, blocking at the era level is simpler and sufficient.

---

### 2. Block withdrawals if unapplied slashes remain in the previous era
Introduces a new safefguard: withdrawals are blocked if the immediately
concluded era has unapplied slashes. Once the era is cleared,
withdrawals resume as normal. We also only care about previous era, and
if this ends up not enough to nudge participants to clear the unapplied
slashes, the withdrawals should resume again in the next era (provided
no new unapplied slashes remain in current era as well).

When this happens, trying to withdraw would emit the error
`UnappliedSlashesInPreviousEra`. Anyone can look up the unapplied
slashes in the previous era through the storage `UnappliedSlashes` and
apply these via the permissionless call `apply_slash`.

This light enforcement should be enough to maintain slashing guarantees
without being too disruptive.

---

### 3. Ensure a full era for applying slashes
Previously, it was possible to receive an offence report at the very end
of the era when its slashes were meant to be applied.

We now reject offences that arrive **after** the end of the era *before*
their application era. An event `OffenceTooOld` is emitted when this
happens to make the behavior visible.
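One way to read the acceptance rule above as code (a hedged sketch — the function and the exact comparison are assumptions inferred from the prose, where the application era is the offence era plus the slash defer duration):

```rust
// Hypothetical sketch: accept an offence report only while the active
// era is still before the offence's application era
// (offence_era + slash_defer_duration); otherwise emit OffenceTooOld.
fn accept_offence(offence_era: u32, active_era: u32, slash_defer_duration: u32) -> bool {
    active_era < offence_era + slash_defer_duration
}

fn main() {
    // Offence from era 10, defer of 3 => applied in era 13.
    assert!(accept_offence(10, 12, 3));  // still a full era left: accept
    assert!(!accept_offence(10, 13, 3)); // too old: reject, OffenceTooOld
    println!("ok");
}
```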

**Open question:**  
We may want to update the `prune_up_to` value sent from AH to RC to
`ActiveEra - SlashDeferDuration + 1` instead of `ActiveEra -
BondingDuration`. This could further guarantee that late offences never
reach the staking pallet.

---

### 4. Unbonding chunks are keyed by active era
We’re moving away from using `CurrentEra` in business logic (except for
elections). This change aligns unbonding with `ActiveEra`. The rest of
the code will be refactored in
[paritytech#8807](paritytech#8807).

---

### 5. More checks on offence pipeline health
Added extra try state checks to ensure the offence processing state is
healthy.

---

## Notes

This is mostly a defensive improvement. These situations are extremely
rare, but the added safeguards ensure slashing guarantees are upheld
even in these extreme cases.
closes paritytech#8785

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
…ech#9348)

This PR adds a node version to the announcement message sent to the
matrix channels, when the new stable release is published.
<img width="834" height="178" alt="Screenshot 2025-07-28 at 14 09 03"
src="https://github.com/user-attachments/assets/8b27997f-55f5-47d5-8538-e1cde420b8b0"
/>


Closes: paritytech/release-engineering#268
Related to paritytech#8860

This PR adds a check to ensure that the collator has respected the
proper order when sending HRMP messages to the runtime.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Andrii <[email protected]>
…#9315)

Increase Kademlia memory store capacity for DHT content providers (used
by parachain DHT-based bootnodes) and reduce provider republish interval
& TTL. This is needed to support testnets with 1-minute fast runtime and
up to 13 parachains.

Parameters set:
- 10000 provider keys per node
- 10h provider record TTL
- 3.5h provider republish interval

Closes paritytech/litep2p#405.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…h#9318)

This PR replaces `log` with `tracing` instrumentation on
`pallet-bridge-parachains` by providing structured logging.

Partially addresses paritytech#9211
# Description

Updates the base image in the Polkadot builder Dockerfile

Closes paritytech#9306
## Integration

Not applicable - this PR has no downstream integration impacts as it
only affects the local build environment

## Review Notes

This PR updates the builder base image version in
`polkadot_builder.Dockerfile`.

Co-authored-by: Alexander Samusev <[email protected]>
)

Add missing implementation of `InspectMessageQueues` for
`UnpaidRemoteExporter`

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…t" (paritytech#9355)

The crate `substrate-prometheus-endpoint` uses tokio items gated by the
feature "net" but doesn't explicitly require it in its `Cargo.toml`. It
compiles on master because `hyper-util` enables the feature "tokio/net",
but upgrading `hyper-util` breaks this indirect enabling.

This fixes the issue by directly requiring the "net" feature, as it is
used.
We should ideally also backport this. It is not a breaking change, given
that the code doesn't compile without the feature and only compiles when
it is indirectly enabled by another crate.
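The fix described above amounts to a one-line dependency change; a sketch of what the corrected `Cargo.toml` entry could look like (the version number is illustrative, not taken from the repository):

```toml
[dependencies]
# Explicitly require tokio's "net" feature instead of relying on
# hyper-util enabling "tokio/net" transitively.
tokio = { version = "1", features = ["net"] }
```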

# Reproduce:

To reproduce do `cargo update -p hyper-util && cargo check -p
substrate-prometheus-endpoint`:

You will get the error:
```
error[E0433]: failed to resolve: could not find `TcpListener` in `net`
  --> substrate/utils/prometheus/src/lib.rs:89:29
   |
89 |     let listener = tokio::net::TcpListener::bind(&prometheus_addr).await.map_err(|e| {
   |                                ^^^^^^^^^^^ could not find `TcpListener` in `net`
   |
note: found an item that was configured out
  --> /home/gui/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.45.0/src/net/mod.rs:43:28
   |
43 |     pub use tcp::listener::TcpListener;
   |                            ^^^^^^^^^^^
note: the item is gated behind the `net` feature
  --> /home/gui/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.45.0/src/net/mod.rs:38:1
   |
38 | / cfg_net! {
39 | |     mod lookup_host;
40 | |     pub use lookup_host::lookup_host;
...  |
51 | |     }
52 | | }
   | |_^
   = note: this error originates in the macro `cfg_net` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this struct
   |
20 + use std::net::TcpListener;
   |
help: if you import `TcpListener`, refer to it directly
   |
89 -     let listener = tokio::net::TcpListener::bind(&prometheus_addr).await.map_err(|e| {
89 +     let listener = TcpListener::bind(&prometheus_addr).await.map_err(|e| {
   |

error[E0412]: cannot find type `TcpListener` in module `tokio::net`
  --> substrate/utils/prometheus/src/lib.rs:99:24
   |
99 |     listener: tokio::net::TcpListener,
   |                           ^^^^^^^^^^^ not found in `tokio::net`
   |
note: found an item that was configured out
  --> /home/gui/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.45.0/src/net/mod.rs:43:28
   |
43 |     pub use tcp::listener::TcpListener;
   |                            ^^^^^^^^^^^
note: the item is gated behind the `net` feature
  --> /home/gui/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.45.0/src/net/mod.rs:38:1
   |
38 | / cfg_net! {
39 | |     mod lookup_host;
40 | |     pub use lookup_host::lookup_host;
...  |
51 | |     }
52 | | }
   | |_^
   = note: this error originates in the macro `cfg_net` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this struct
   |
20 + use std::net::TcpListener;
   |
help: if you import `TcpListener`, refer to it directly
   |
99 -     listener: tokio::net::TcpListener,
99 +     listener: TcpListener,
   |

Some errors have detailed explanations: E0412, E0433.
For more information about an error, try `rustc --explain E0412`.
error: could not compile `substrate-prometheus-endpoint` (lib) due to 2 previous errors
```

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <[email protected]>
…#9353)

# Description

Please consider this Pull Request to remove the bootnodes provided by
Gatotech to the following relaychain and systemchains:

- `westend`
  - `asset-hub-westend`
  - `bridge-hub-westend`
  - `collectives-westend`
  - `coretime-westend`
  - `people-westend`

This removal responds to the discontinuation of support by the
Infrastructure Builders' Programme of Westend in favour of enhanced
support to the Paseo testnet.

After this PR is merged, we will proceed to decommission the relevant
nodes.

Many thanks!!

Best regards

**_Milos_**

Co-authored-by: Bastian Köcher <[email protected]>
Deep inside subxt, the default mortality period for a transaction is set
to 32 blocks. On a chain that builds blocks every 500ms, this may lead
to issues that manifest as invalid transaction signatures. To protect
poor developers from endless debugging sessions, we now send
transactions as immortal.
…h#9354)

Relates to: paritytech#9336,
paritytech#7321

This PR aims to normalize the result of `stringify` when used inside
nested macros to stringify token streams for the benchmarking framework.
Different versions of Rust may or may not include space characters
around tokens like `<`, `>`, `::`, so we just remove the additional
spaces.
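The normalization described above can be sketched like this. This is a hypothetical, simplified version (the real implementation lives in the benchmarking macros and may handle more tokens):

```rust
// Hypothetical sketch: strip spaces around separator tokens so that
// `stringify!` output is identical across rustc versions, e.g.
// "Vec < T >" and "Vec<T>" normalize to the same string.
fn normalize(s: &str) -> String {
    let mut out = s.to_string();
    for tok in ["<", ">", "::", ","] {
        let spaced_before = format!(" {tok}");
        let spaced_after = format!("{tok} ");
        // Repeat until stable, to collapse runs of spaces around tokens.
        loop {
            let next = out.replace(&spaced_before, tok).replace(&spaced_after, tok);
            if next == out {
                break;
            }
            out = next;
        }
    }
    out
}

fn main() {
    assert_eq!(normalize("Vec < T >"), "Vec<T>");
    assert_eq!(normalize("path :: to :: Item"), "path::to::Item");
    println!("ok");
}
```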

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## Changes
- Updated the `Held Balance` definition to reflect the current behavior.
The previous explanation was accurate when staking used locks (which
were part of the free balance), but since [staking now uses
holds](paritytech#5501), the old
definition is misleading.
This issue was originally pointed out by @michalisFr
[here](w3f/polkadot-wiki#6793 (comment)).
- Fixed a broken reference in the deprecated doc for `ExposureOf`, which
was (ironically) pointing to a non-existent type named `ExistenceOf`.
This slipped in during our [mega async staking
PR](paritytech#8127).
…aritytech#9187)

## Problem

Previously, the `cancel_deferred_slash` function required exact slash
keys (validator, slash fraction, page index) to cancel slashes. However,
when additional offence reports arrived after a cancellation referendum
was initiated, they could create new entries with higher slash
fractions, making the original cancellation ineffective.

### Changes

We introduce a new approach that tracks cancelled slashes by era and
validator with their maximum slash fractions:

1. **New Storage**: Added `CancelledSlashes` storage map that stores
cancellation decisions by era.
2. **Updated API**: Changed call signature `cancel_deferred_slash` to
accept `Vec<(AccountId, Perbill)>` instead of complex slash keys. Admin
origin can now specify which validators to cancel and up to what slash
fraction.
3. **Cleanup**: `CancelledSlashes` are cleared after all slashes for an
era are processed.
4. **Updated SlashCancelled Event**: Event contains only slash_era and
validator instead of slash key tuple and payout.

---------

Co-authored-by: Paolo La Camera <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Changes:
- Add `--runtime-log` option to the omni-bencher CLI
- Read the env var `RUNTIME_LOG` as a fallback to the `--runtime-log`
option
- Set a custom log level for runtime benchmarks that can differ from the
CLI level
- Fix an issue where old runtimes have a space in the pallet or instance
name, caused by a breaking change in the `quote` macro

Note: I saw that the genesis builder is not using the provided host
functions, hence it still logs things during genesis config building.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Branislav Kontur <[email protected]>
paritytech#9380)

Which will consequently make the XCM/MQ code path aware of the weights,
which was previously not the case.

Additionally, adds an event for when an era is pruned.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Paolo La Camera <[email protected]>
This pull request implements support for ignoring trie nodes while
recording a proof. It directly includes the feature into
`basic-authorship` to later make use of it in Cumulus for multi-block
PoVs.

The idea behind this is that when you have multiple blocks per PoV, trie
nodes accessed or produced by an earlier block (in the same `PoV`) are
not required to be added to the storage proof again. So, all the blocks
in one `PoV` basically share the same storage proof. This also impacts
things like storage weight reclaim, because ignored trie nodes do not
contribute to the storage proof size (similar to when this would
happen in the same block).

# Example 

Let's say block `A` accesses key `X` and block `B` accesses key `X`
again. As `A` has already read it, we know that it is part of the
storage proof and thus don't need to add it again when building `B`. The
same applies to storage values produced by an earlier block (in the same
PoV): these storage values are an output of the execution and thus don't
need to be added to the storage proof :)
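The sharing behaviour in the example can be sketched with a toy recorder. This is a hypothetical, simplified model (the real recorder works on trie node hashes and is shared via the proof-recording machinery, not a plain `HashSet`):

```rust
use std::collections::HashSet;

// Hypothetical sketch: a proof recorder shared by all blocks in one
// PoV. Nodes already recorded by an earlier block are "ignored", so
// repeated accesses don't grow the proof.
struct SharedProofRecorder {
    recorded: HashSet<Vec<u8>>, // nodes already part of the proof
    proof_size: usize,
}

impl SharedProofRecorder {
    fn new() -> Self {
        Self { recorded: HashSet::new(), proof_size: 0 }
    }

    // Record a node access; only previously unseen nodes contribute to
    // the proof size (which is what storage weight reclaim observes).
    fn record(&mut self, node: Vec<u8>, node_len: usize) {
        if self.recorded.insert(node) {
            self.proof_size += node_len;
        }
    }
}

fn main() {
    let mut rec = SharedProofRecorder::new();
    rec.record(b"node-for-key-X".to_vec(), 100); // block A reads key X
    rec.record(b"node-for-key-X".to_vec(), 100); // block B reads X again
    assert_eq!(rec.proof_size, 100); // counted once for the whole PoV
    println!("ok");
}
```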


Depends on paritytech#6137. Base
branch will be changed when this got merged.

Part of: paritytech#6495

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Michal Kucharczyk <[email protected]>
… chain (paritytech#7321)

### Title: Update XCM benchmarks for sibling parachain delivery (closes
paritytech#7211)

### Description:
This PR updates XCM benchmarking configurations for testnet system
parachains to reflect delivery to sibling parachains instead of the
Parent relay chain.

Integration:

Replaced ToParentDeliveryHelper with ToParachainDeliveryHelper. 

Updated benchmark destinations

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Karol Kokoszka <[email protected]>
Co-authored-by: Karol Kokoszka <[email protected]>
Relates to: paritytech#9093
Requires: paritytech#9179

This PR introduces emulated test scenarios:

#### [Scenario 1]
(Penpal -> AH -> Penpal) to showcase usage of remote `Transact` to swap
assets remotely on AssetHub while also making use of
`add_authorized_alias`, to transact as Sender on remote side (instead of
Senders sovereign account).

1. Prepare sovereign accounts funds, create pools, prepare aliasing
rules
2. Send WND from Penpal to AssetHub (AH being remote reserve for WND)
3. Alias into sender account and exchange WNDs for USDT using `Transact`
with `swap_tokens_for_exact_tokens` call inside
4. Send USDT and leftover WND back to Penpal

#### [Scenario 2]
(Penpal -> AH -> Penpal) to showcase usage of remote `Transact` to swap
assets remotely on AssetHub.

1. Prepare sovereign accounts funds, create pools, prepare aliasing
rules
2. Send WND from Penpal to AssetHub (AH being remote reserve for WND)
3. Exchange WNDs for USDT using `Transact` with
`swap_tokens_for_exact_tokens` call inside
4. Send USDT and leftover WND back to Penpal

#### [Scenario 3]
(Penpal -> AH -> Penpal) to showcase same as above but this time using
`ExchangeAsset` XCM instruction instead of `Transact`:

1. Prepare sovereign accounts funds, create pools
2. Send WND from Penpal to AssetHub (AH being remote reserve for WND)
3. Exchange WNDs for USDT using `ExchangeAsset`
4. Send USDT and leftover WND back to Penpal

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <[email protected]>
raymondkfcheung and others added 16 commits July 31, 2025 12:36
This PR replaces `log` with `tracing` instrumentation on
`pallet-bridge-beefy` by providing structured logging.

Partially addresses paritytech#9211
Makes some stuff public and derives traits. Also removes one silently
truncating constructor from `ParaId`.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
closes paritytech#8766

This PR mainly adds a setup based on PAPI to automate our e2e tests for
staking async. Most of the new code is in
`frame/staking-async/runtimes/papi-tests`. There is a `README` and a
`Justfile` there that should contain all the info you would need.

Best way to get started is:

```
just setup
bun test tests/unsigned-dev.test.ts
```

Tests are written in TypeScript, and monitor the underlying ZN process
for a specific sequence of events. An example of how to write tests is
[here](https://github.com/paritytech/polkadot-sdk/pull/8802/files#diff-4b44e03288aeaf5ec576ae0094c7a7ae28689dfcc5b317a28478767b345991db).

All other changes are very insubstantial. 

### Why this setup? 

* Staking async e2e tests are long running, and doing multiple scenarios
manually is hard. Expressing them as a sequence of events is much
easier.
* For all scenarios, we need to monitor both the onchain weight, and the
offchain weight/PoV recorded by the collator (therefore our only option
is ZN). The setup reports both. For example, the logs look like this:

```
verbose: Next expected event: Observe(Para, MultiBlockElectionVerifier, Verified, no dataCheck, no byBlock), remaining events: 14
verbose: [Para#56][⛓ 52ms / 2,119 kb][✍️ hd=0.22, xt=3.94, st=6.54, sum=10.70, cmp=9.61, time=1ms] Processing event: MultiBlockElectionVerifier Verified [1,10]
info:    Primary event passed
verbose: Next expected event: Observe(Para, MultiBlockElectionVerifier, Verified, no dataCheck, no byBlock), remaining events: 13
verbose: [Para#56][⛓ 52ms / 2,119 kb][✍️ hd=0.22, xt=3.94, st=6.54, sum=10.70, cmp=9.61, time=1ms] Processing event: MultiBlockElectionVerifier Verified [2,10]
info:    Primary event passed
verbose: Next expected event: Observe(Para, MultiBlockElectionVerifier, Verified, no dataCheck, no byBlock), remaining events: 12
verbose: [Para#56][⛓ 52ms / 2,119 kb][✍️ hd=0.22, xt=3.94, st=6.54, sum=10.70, cmp=9.61, time=1ms] Processing event: MultiBlockElectionVerifier Verified [3,10]
```

`⛓` indicates the onchain weights and `✍️` the collator PoV data
(header, extrinsics, storage, sum of all, and all compressed,
respectively). The above lines are an example of code paths where the
onchain weight happens to over-estimate by a lot. This setup helps us
easily find and optimize all of them.

---------

Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Paolo La Camera <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: Ankan <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Alexandre R. Baldé <[email protected]>
This upgrades wasmtime to the latest version and also fixes backtraces
for `debug` builds.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
…itytech#92… (paritytech#9423)

# Description

This PR reverts paritytech#9207 after @michalkucharczyk's proper fix in paritytech#9338.

## Integration

N/A

## Review Notes

N/A
## Westend Secretary Program

This PR includes the Secretary program and end-to-end validation of
XCM-based salary payments for the Westend runtime, ensuring consistency
between implementations.

### Key Changes
1. Integrated Secretary configuration into Westend runtime
- Added `SecretaryCollective` and `SecretarySalary` pallets to the
runtime.
   - Triggers salary payment through XCM
   - Verifies successful:
     - XCM message transmission
     - Asset transfer execution
     - Message queue processing

### Context from Runtime PRs
- Based on [Secretary Program
implementation](polkadot-fellows/runtimes#347)
- Follows patterns established in [Fellowship salary
tests](https://github.com/paritytech/polkadot-sdk/blob/master/cumulus/parachains/integration-tests/emulated/tests/collectives/collectives-westend/src/tests/fellowship_salary.rs)
- Addresses feedback from original implementation:
  - Simplified polling mechanism using `NoOpPoll`
  - Maintained consistent salary structure (6666 USDT for rank 1)
  - Kept same XCM payment configuration
…ytech#9416)

The existential deposit to create a new account is part of the storage
deposit. Hence, if the storage deposit limit is too low to create a new
account, we fail the transaction. However, this limit was not enforced
for plain transfers. The reason is that we only enforce the limit at the
end of each frame, but for plain transfers (transferring to a
non-contract) there is no frame.

This PR fixes the situation by enforcing the limit when transferring the
existential deposit in order to create a new account.
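The enforced rule can be sketched as a simple check. This is a hypothetical illustration (the function, error name spelling, and parameters are assumptions; the real pallet threads this through its frame accounting):

```rust
// Hypothetical sketch: when a plain transfer creates the destination
// account, the existential deposit counts against the storage deposit
// limit, and the transfer fails if the limit cannot cover it.
fn check_transfer(
    creates_account: bool,
    existential_deposit: u64,
    storage_deposit_limit: u64,
) -> Result<(), &'static str> {
    if creates_account && storage_deposit_limit < existential_deposit {
        // Previously this case slipped through for plain transfers,
        // because the limit was only enforced at the end of a frame.
        return Err("storage deposit limit too low to create account");
    }
    Ok(())
}

fn main() {
    assert!(check_transfer(true, 100, 50).is_err()); // limit too low
    assert!(check_transfer(true, 100, 100).is_ok()); // limit covers ED
    assert!(check_transfer(false, 100, 0).is_ok());  // existing account
    println!("ok");
}
```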

---------

Co-authored-by: PG Herveou <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…sed (paritytech#9419)

This prints more information on why a collation wasn't advertised. In
this exact case it checks if the collation wasn't advertised because of
a session change. This is mainly some debugging help.
This PR replaces `log` with `tracing` instrumentation on `xcm-emulator`
by providing structured logging.

Continues paritytech#8732
This PR replaces `log` with `tracing` instrumentation on `bp-runtime` by
providing structured logging.

Partially addresses paritytech#9211
Instead of using the name of the node, we should use `Relaychain` as
done by normal nodes. This makes it easier to read the logs.
…aritytech#9393)

Adds a total of 4 new jobs to `Release - Build node release candidate`
CI workflow
- 2 for releasing `substrate-node` binaries for linux/mac
- 2 for releasing `eth-rpc` binaries for linux/mac

CLOSES: paritytech#9386

---------

Co-authored-by: EgorPopelyaev <[email protected]>
fix issue: paritytech/contract-issues#141

Corrects the condition for minting a new currency unit when transferring
dust. The condition incorrectly checked
`to_info.dust.saturating_add(dust) >= plank`, which could lead to
unexpected minting behavior. It now correctly checks `to_info.dust >=
plank` before minting.
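One way to read the fix described above (a hedged, simplified sketch: the names `plank`, `transfer_dust`, and the credit-then-check ordering are assumptions, since only the two conditions are quoted; the point is that once `dust` has been credited to the account, re-adding it in the mint check would double-count it):

```rust
// Hypothetical sketch: `PLANK` is the smallest whole currency unit;
// sub-plank amounts accumulate as dust and a unit is minted only once
// the *stored* dust reaches a full plank.
const PLANK: u64 = 1_000;

struct AccountInfo {
    dust: u64,
}

// Returns true when a whole currency unit is minted.
fn transfer_dust(to_info: &mut AccountInfo, dust: u64) -> bool {
    to_info.dust = to_info.dust.saturating_add(dust); // credit first
    if to_info.dust >= PLANK {
        // corrected check: stored dust only
        to_info.dust -= PLANK;
        return true; // mint one whole currency unit
    }
    false
}

fn main() {
    let mut acc = AccountInfo { dust: 600 };
    assert!(!transfer_dust(&mut acc, 300)); // 900 < 1000: no mint
    assert!(transfer_dust(&mut acc, 200));  // 1100 >= 1000: mint one
    assert_eq!(acc.dust, 100);              // remainder kept as dust
    println!("ok");
}
```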
Make sure to run the `just killall` step in a bash shell; otherwise,
while trying to kill a non-existing process, e.g.
```bash
killall:
  pkill -f zombienet || true
```

we would get the following issue while running in a container:

```bash
Run just setup
  just setup
  shell: sh -e {0}
  env:
    IMAGE: docker.io/paritytech/ci-unified:bullseye-1.85.0-2025-01-28-v202504231537
    RUST_INFO: rustup show && cargo --version && rustup +nightly show && cargo +nightly --version
    CACHE_ON_FAILURE: true
    CARGO_INCREMENTAL: 0
🧹 Killing any existing zombienet or chain processes...
pkill -f zombienet || true
error: Recipe `killall` was terminated on line 124 by signal 15
error: Recipe `setup` failed with exit code 143
Error: Process completed with exit code 143.
```

Running the just step within a bash shell ensures that the error is
properly handled and propagated without terminating the just script.
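Based on the `shell: sh -e {0}` line in the log above, the kind of workflow change described might look like the following GitHub Actions step (step name and recipe are illustrative, not taken from the actual workflow file):

```yaml
# Run the just recipe under bash so that `pkill -f zombienet || true`
# has its non-zero exit swallowed instead of terminating the recipe
# with signal 15 under `sh -e`.
- name: Setup
  shell: bash
  run: just setup
```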

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>