
Conversation

@onur-ozkan (Contributor) commented on Nov 18, 2025

Description

Sequence number creation and incrementing are now fallible instead of panicking.

Also, the previous implementation had multiple issues, such as casting a u128 into a u64 directly and using nanoseconds instead of milliseconds.
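
For reference, the error type the change relies on is not shown on this page; a minimal sketch, assuming only the two variants that appear in the diff below, could look like this:

// Sketch of the error type; the variant names are taken from the diff,
// everything else (derives, visibility) is an assumption.
#[derive(Debug)]
enum SequenceNumberError {
    /// The system clock reported a time before the Unix epoch.
    ClockBeforeUnixEpoch,
    /// The timestamp did not fit into a u64, or the counter would wrap around.
    Overflow,
}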

Change checklist

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • A changelog entry has been made in the appropriate crates

Comment on lines +192 to +206
fn new() -> Result<Self, SequenceNumberError> {
    let unix_timestamp = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
-       .expect("time to be linear")
-       .as_nanos();
+       .map_err(|_| SequenceNumberError::ClockBeforeUnixEpoch)?
+       .as_millis();

-   Self(unix_timestamp as u64)
+   let timestamp = u64::try_from(unix_timestamp).map_err(|_| SequenceNumberError::Overflow)?;
+
+   Ok(Self(timestamp))
}

-fn next(&mut self) -> u64 {
-   self.0 = self
-       .0
-       .checked_add(1)
-       .expect("to not exhaust u64 space for sequence numbers");
+fn next(&mut self) -> Result<u64, SequenceNumberError> {
+   self.0 = self.0.checked_add(1).ok_or(SequenceNumberError::Overflow)?;

-   self.0
+   Ok(self.0)

@elenaf9 (Member) commented on Nov 18, 2025
Generally I agree that we should avoid all possible panics.
That said, I think in both cases here it's safe to assume that they won't panic. If the system time of a user is that far off, they'll run into major issues anyway. And I don't think the sequence number can ever go above 2**64.

@onur-ozkan (Contributor, Author) replied:
I am aware that panicking is unlikely here. I only added it as an improvement. The real issue was that the doc comment referred to milliseconds but the code was actually using nanoseconds.
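
To illustrate the mismatch (a reconstruction for illustration only, not code copied from the crate; the struct name and doc comment wording are assumptions), the pre-PR version roughly amounted to:

use std::time::SystemTime;

struct SequenceNumber(u64);

impl SequenceNumber {
    // The doc comment (reconstructed) promised milliseconds:
    /// The initial value is the Unix timestamp in milliseconds.
    fn new() -> Self {
        let unix_timestamp = SystemTime::now()
            .duration_since(SystemTime::UNIX_EPOCH)
            .expect("time to be linear")
            .as_nanos(); // what the code actually produced: nanoseconds

        Self(unix_timestamp as u64) // lossy u128 -> u64 cast
    }
}

fn main() {
    let seq = SequenceNumber::new();
    println!("initial (ns-based) value: {}", seq.0);
}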

@elenaf9 (Member) replied:

Ah yeah, you are right. Changing to milliseconds sounds good to me, or alternatively just fix the docs.

Technically, we do violate backwards compatibility, because with this change the sequence number of all peers will "jump" back to a much lower value. Since the counter increases linearly, a peer could eventually start re-using numbers that have been used in the past, which violates the spec: https://github.com/libp2p/specs/blob/69c4fdf5da3a07d2f392df6a892c07256c1885c0/pubsub/README.md?plain=1#L136-L142

That said, I don't think it will ever happen in practice. The old ns-based sequence numbers are so much larger than the new ms-based ones will ever be. cc @jxs
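
A quick back-of-the-envelope check of that last point (the numbers below are illustrative, not taken from the codebase):

// Compare an old nanosecond-based sequence number with a new millisecond-based
// one for roughly the same wall-clock instant (~Nov 2025).
fn main() {
    let old_ns_based: u64 = 1_763_500_000_000_000_000; // ns since the Unix epoch
    let new_ms_based: u64 = 1_763_500_000_000; // the same instant in ms
    // A peer would have to increment its counter this many times before it
    // could collide with a sequence number it published before the upgrade.
    let gap = old_ns_based - new_ms_based;
    println!("increments needed before any collision: {gap}"); // ~1.76 * 10^18
}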

Sequence number creation and incrementing are now fallible instead of
panicking.

Also, previous implementation had multiple issues such as casting
u128 into u64 directly and using nanoseconds instead of milliseconds.

Signed-off-by: Onur Özkan <[email protected]>
@onur-ozkan force-pushed the better-sequence-numbering branch from f41febf to d9ca9f7 on November 19, 2025 at 15:18