
Conversation

cryptoquick

This proposal spent several months gathering feedback from the mailing list and from other advisors. It is hopefully polished enough to submit upstream.

Let me know if you have any questions or feedback, and of course feel free to submit suggestions.

Thank you for your time.

@cryptoquick cryptoquick marked this pull request as draft September 27, 2024 18:18
Member

@jonatack jonatack left a comment


Interesting (the question of resistance to quantum computing may have resurged lately with the publication of https://scottaaronson.blog/?p=8329, see also https://x.com/n1ckler/status/1839215426091249778).

@cryptoquick cryptoquick force-pushed the p2qrh branch 2 times, most recently from b6ed2c3 to d6d15ad Compare September 28, 2024 18:01
@jonatack
Member

jonatack commented Oct 1, 2024

@cryptoquick Can you begin to write up the sections currently marked as TBD, along with a backwards compatibility section (to describe incompatibilities, severity, and suggest mitigations, where applicable/relevant)? We've begun to reserve a range of BIP numbers for this topic, pending continued progress here.

@jonatack jonatack added the PR Author action required Needs updates, has unaddressed review comments, or is otherwise waiting for PR author label Oct 9, 2024
@jonatack
Member

@cryptoquick ping for an update here. Have you seen https://groups.google.com/g/bitcoindev/c/p8xz08YTvkw / https://github.com/chucrut/bips/blob/master/bip-xxxx.md? It may be interesting to review each other's work and possibly collaborate.

@conduition

Hey @EthanHeilman, I wanted to cross-link this post on delving. Adding a new tapleaf version with dynamically endorsed script leaves may give us a way to start migrating people early, even before ML-DSA and SLH-DSA opcodes are defined, while still allowing the use of those opcodes once they're spec'd out and deployed.

That said, if we can package those opcodes together alongside BIP360, I still think that'd be a better option. It would lead to less complexity and confusion overall.

@EthanHeilman
Contributor

@murchandamus We are putting up the PQ signature BIP soon. Would you rather it be part of this PR or a new PR?

Remove dashes in BIP numbers and change to SegWit version 2
@leviwinks

I have a suggestion @EthanHeilman
[email protected]

- Witness program calculated as SHA256 of binary encoding of PI
@murchandamus
Contributor

@murchandamus We are putting up the PQ signature BIP soon. Would you rather it be part of this PR or a new PR?

Hey @EthanHeilman and @cryptoquick, given the amount of comments this PR already has, I think it would be clearer to have a separate PR for the companion BIP.

@jonatack
Member

Agree on a new BIP and keeping them focused. A range of BIP numbers was reserved for a series on this topic.

Adding PQ signatures via a tapleaf version increase does not introduce any new opcodes and allows previously written tapscript programs to be used with PQ signatures
by simply using the new tapleaf version. Instead of developers explicitly specifying the intended signature algorithm through an opcode, the algorithm
to use must be indicated within the public key or public key hash<ref>'''Why not have CHECKSIG infer the algorithm based on signature size?''' Each of the three signature algorithms, Schnorr, ML-DSA, and SLH-DSA, has a unique signature size. The problem with using signature size to infer the algorithm is that the spender specifies the signature. This would allow a public key which was intended to be verified with Schnorr to be verified using ML-DSA because the spender supplied an ML-DSA signature. Signature algorithms are often not secure if you can mix and match public keys and signatures across algorithms.</ref>.
The disadvantage of this approach is that it requires a new tapleaf version each time we want to add a new signature algorithm.

@conduition conduition Aug 1, 2025


You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.

For example, let's say the new multi-algo version of OP_CHECKSIG chooses signature algo based on a version byte header in the pubkey. 0x00 for Schnorr, 0x01 for ML-DSA, 0x02 for SLH-DSA. Define any other public-key version byte as being an "auto-succeed" sigalg type. Adding a new algorithm in the future is as easy as redefining one such "sigalg" version.
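A minimal sketch of that dispatch, purely my own illustration (not from the BIP or this PR); the verifier functions are stand-in stubs where real Schnorr/ML-DSA/SLH-DSA code would go:

```python
# Stub verifiers standing in for real signature-algorithm implementations.
def verify_schnorr(pk, sig, msg):
    raise NotImplementedError

def verify_ml_dsa(pk, sig, msg):
    raise NotImplementedError

def verify_slh_dsa(pk, sig, msg):
    raise NotImplementedError

# Version byte in the pubkey selects the algorithm.
VERIFIERS = {0x00: verify_schnorr, 0x01: verify_ml_dsa, 0x02: verify_slh_dsa}

def checksig_multi_algo(pubkey: bytes, sig: bytes, msg: bytes) -> bool:
    verifier = VERIFIERS.get(pubkey[0])
    if verifier is None:
        # Unknown sigalg version byte: auto-succeed, so a future soft fork
        # can assign it a real algorithm without a new tapleaf version.
        return True
    return verifier(pubkey[1:], sig, msg)
```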

Contributor


You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.

This works as long as the signatures are smaller than the max stack element size of 520 bytes. Unfortunately both SLH-DSA and ML-DSA signatures are over the max stack element size.

The precedence, at a rough level, works like:

  1. IF Witness version not recognized --> return SUCCESS
  2. IF Witness version == 1 and tapleaf version not recognized --> return SUCCESS
  3. IF tapscript contains OP_SUCCESSx opcode --> return SUCCESS
  4. IF stack item size > MAX_SCRIPT_ELEMENT_SIZE in witness stack --> return FAIL
  5. Execute tapscript on witness stack; if OP_CHECKSIG has a pubkey of size other than 32 or 0 --> return SUCCESS

We use OP_SUCCESSx for new opcodes, but if we wanted to repurpose OP_CHECKSIG we would need to use a new tapleaf version or a new witness version.
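Paraphrasing that precedence as a sketch under my own simplifications (the tapleaf version constant 0xc0 and the string return values are illustrative, not consensus code):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520  # current tapscript stack element limit

def validation_outcome(witness_version, tapleaf_version, tapscript, stack):
    """tapscript: list of opcode-name strings; stack: list of byte strings."""
    if witness_version not in (0, 1):
        return "SUCCESS"  # 1. unrecognized witness version
    if witness_version == 1 and tapleaf_version != 0xc0:
        return "SUCCESS"  # 2. unrecognized tapleaf version
    if any(op.startswith("OP_SUCCESS") for op in tapscript):
        return "SUCCESS"  # 3. OP_SUCCESSx short-circuits execution
    if any(len(item) > MAX_SCRIPT_ELEMENT_SIZE for item in stack):
        return "FAIL"     # 4. oversized witness stack element
    return "EXECUTE"      # 5. actually run the tapscript
```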


Sorry, maybe I should clarify. I'm aware of the script size limits and their effects. I'm saying that, once we add a new tapscript version, we get an opportunity to redefine how OP_CHECKSIG works, so we can add an "always succeed" path for pubkeys with an unrecognized format (e.g. sigalg version 0x03 and up).

Then, if/when we want to add new signature algos in the future (such as SQIsign), we don't need a third newer tapscript version.

So the statement "it requires a new tapleaf version each time we want to add a new signature algorithm" is not entirely correct.

Comment on lines +328 to +329
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs
that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack

@conduition conduition Aug 1, 2025


Typo fix:

Suggested change
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs
that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only have effect for transaction outputs
that use the new opcodes. Otherwise this stack element size limit increase would be a hard fork. If the tapleaf version is used, then the stack

This complexity is one more reason to prefer the new tapscript version approach, IMO.

Contributor


A new tapleaf version or witness version would require maintaining two versions of tapscript.

  1. This would be messier in the bitcoin-core codebase and more likely to introduce an accidental hardfork or other bug. It wouldn't be terrible, but all things being equal, we should choose the simpler option.
  2. It always requires that developers care about tapleaf versions for opcode features. Loss of funds could result if the wrong version is used.

My personal take is that we should only use tapleaf versions for major rewrites of Bitcoin script. For instance, GSR would be a great fit for a new tapleaf version. It would likely have its own interpreter.cpp file. Developers aren't going to confuse GSR script with tapscript.

Comment on lines +375 to +376
To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is


This would kill the classic pattern of OP_DUP OP_SHA256 <hash> OP_EQUALVERIFY OP_CHECKSIG for scripts using PQ signatures.

Could we not instead limit the total stack size to 520kb more explicitly?

Contributor


Does anyone do OP_DUP on signatures? This wouldn't break for PQ public keys.

Could we not instead limit the total stack size to 520kb more explicitly?

I personally would prefer that the limitation be expressed this way, but that is likely to be a highly controversial soft fork requiring careful consideration of performance implications.

If you think such a soft fork can get activated, do it and we will use it in BIP 360. I worry that including this change in BIP 360 would reduce the chances of BIP 360 activating to almost zero.


@conduition conduition Aug 1, 2025


Sorry, i posted this comment before I read the section where you talked about compressing ML-DSA pubkeys using a hash, and conjoining the ML-DSA pubkey and signature together. Please ignore

public keys in excess of 520 bytes. For instance:

* ML-DSA public keys are 1,312 bytes and signatures are 2,420 bytes
* SLH-DSA public keys are 32 bytes and signatures are 7,856 bytes


I'm sure you're aware already, but by tuning SLH-DSA parameters down we can get signatures of 4 kilobytes or less, about on par with ML-DSA, while still being securely usable for hundreds of millions of signatures, far more than any bitcoin key will ever need to sign. We can condense signatures even more using clever compression by the signer.

I think this would go a long way to making SLH-DSA more practical as an option. ML-DSA's main advantage then would not be its signature size, but its faster signing and verification times.

Contributor


I've looked at some of the XMSS schemes that provide very tunable numbers of signatures. It is a neat idea.

So far we have not included this in the BIP because of the design rationale of "Use standardized post-quantum signature algorithms." This is so we can benefit from all the other research, hardware rollouts and software support.

hundreds of millions of signatures, far more than any bitcoin key will ever need to sign

Where do you draw the line here? A Lightning Network channel could in theory use millions of signatures for one public key, but those should probably use ML-DSA. I don't like having a special rule for one signature scheme, although hundreds of millions of signatures is unlikely to ever happen. But why not 1 million signatures, or 10,000 signatures? What's the right number?


Where do you draw the line here

Great question. There are various rationales you could go by, but I would frame it like this:

  • We should pick some duration X, denominated in "years of repeated signing" (YORS).
  • Assume a wallet is somehow tricked into repeatedly signing random messages for an adversary using an SLH-DSA key.
  • Assume only a single benchmarked CPU is used to produce the signatures, and assume zero latency between victim and attacker.
  • After X years of repeated signing, the public key should still maintain at least $2^{128}$ security against the attacker forging any signatures.

I don't know what the magic number X is there. Realistically I don't see any wallet ever signing data continuously for more than a few years, but maybe others would prefer stronger guarantees. Anyway, this is a number we can more easily debate about.

Some suggestions:

  • Maybe X = 30 YORS to match human reproductive cycles - this is roughly the global average age of first childbearing.
  • Maybe X = 45 YORS to match the length of an average human's working life - Keys last your entire career.
  • Maybe X = 70 YORS to match an average human lifetime. Keys live as long as we do.

Before we pin this down, we should have a working SPHINCS implementation we can benchmark against. Then we can pin down one or more parameter sets to standardize based on its performance.
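As rough framing arithmetic, a YORS duration translates into a total signature budget once you assume a signing rate. The rate below is a placeholder assumption, not a benchmark:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def signature_budget(years: float, sigs_per_second: float) -> int:
    """Total signatures an attacker could extract from a victim wallet
    signing continuously for `years` at `sigs_per_second`."""
    return int(years * SECONDS_PER_YEAR * sigs_per_second)

# With a placeholder rate of 10 SLH-DSA signatures per second per core:
for years in (30, 45, 70):
    print(years, "YORS ->", f"{signature_budget(years, 10):,}", "signatures")
```

A real parameter choice would substitute benchmarked signing rates for the placeholder and check the resulting count against the scheme's security-versus-signature-count curve.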


I was having trouble using the official parameter exploration script, so I made a faster/easier ported version in python, if anyone is curious: https://gist.github.com/conduition/469725009397c08a2d40fb87c8ca7baa


To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is
because currently there is no way to get a stack element larger than 520 bytes onto the stack so triggering this rule is currently

@conduition conduition Aug 1, 2025


Don't forget about OP_OVER, OP_2OVER, OP_2DUP, OP_3DUP, OP_PICK, OP_IFDUP, and OP_TUCK which all copy stack items.

Contributor


What would be the impact here?


If you want to impose new size limits on stack item duplication, then we should extend the BIP's wording to cover not just OP_DUP but also any opcode which copies stack items. Here's my suggested wording:

To prevent OP_DUP and other opcodes from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we modify all opcodes to fail if copying any stack elements larger than 520 bytes. Note this change is not consensus critical and does not require any sort of fork.
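That generalized rule could be sketched like this (the opcode set comes from the list above; the helper name and shape are hypothetical, not from the BIP):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520

# Opcodes that copy stack items, per conduition's list.
COPYING_OPCODES = {"OP_DUP", "OP_2DUP", "OP_3DUP", "OP_OVER",
                   "OP_2OVER", "OP_PICK", "OP_IFDUP", "OP_TUCK"}

def copy_allowed(opcode: str, elements_to_copy) -> bool:
    """Return False (script failure) when a copying opcode would
    duplicate a stack element larger than the size limit."""
    if opcode not in COPYING_OPCODES:
        return True
    return all(len(e) <= MAX_SCRIPT_ELEMENT_SIZE for e in elements_to_copy)
```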

Contributor


Missed these! Thanks

Comment on lines +680 to +682
Commit-reveal schemes can only be spent from and to outputs that are not vulnerable to long-exposure quantum attacks, such as
P2PKH, P2SH, P2WPKH, etc. To use tapscript outputs with this system, either a soft fork could disable the key path spend of P2TR outputs,
or P2QRH could be used here, as it does not have a key path spend and thus is not vulnerable to long-exposure quantum attacks.


Recently on the mailing list, we've had discussions about recovering coins from exposed pubkeys by using a BIP32 xpriv commit/reveal protocol. So we can rescue coins that are vulnerable to long-exposure attacks. It just requires a soft fork to disable regular EC spending without defined commit/reveal steps.


@ariard ariard left a comment


Thanks for the efforts on this proposal; you answered most of my previous comments, most of which can be resolved (I still wonder about the long- vs short-exposure attack distinction), and I added more. I'm in the "Design" section so far.


Seeing that the idea is to have CRYSTALS-Dilithium and SPHINCS+ proposed as the concrete signature algorithms, I might start to hack on an implementation for FALCON, whose signatures are smaller than Dilithium's while also being lattice-based cryptography. It's an interesting thing to hack on. Libbitcoinpqc is in C, so perfect.


This document is licensed under the 3-clause BSD license.

=== Motivation ===


By comparison, BIP340 is reasonably wordy on the security properties of the standardized signature scheme:
https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki

See the arguments on provable security, non-malleability and linearity.

I’m not sure if you’re really familiar with post-quantum cryptography (e.g https://arxiv.org/pdf/1710.10377 is a good starter), though there are really few papers doing cryptanalysis of quantum signature scheme in the Bitcoin setting. While of course one can object why it matters to do cryptanalysis i_n the Bitcoin setting_, one should remind itself that some argued property in ECC literature (e.g signature non-malleability) might have tremendous downsides if not well understood (e.g breaking BIP141 “trust-free unconfirmed transaction dependency chain”).

That’s the “motivation” section is for now very drafty and over-verbose, sure but that why it’s still a draft.

Comment on lines +134 to +138
* P2PK outputs (Satoshi's coins, CPU miners, starts with 04)
* Reused addresses (any type, except P2QRH)
* Taproot addresses (starts with bc1p)
* Extended public keys, commonly known as "xpubs"
* Wallet descriptors


See comments there on trying to define “long exposure” vs “short exposure” attacks:
https://github.com/bitcoin/bips/pull/1670/files#r1966885340


It's for the above reason that, for those who wish to be prepared for quantum emergency, it is recommended that no more
than 50 bitcoin are kept under a single, distinct, unused Native SegWit (P2WPKH, "bc1q") address at a time. This is
assuming that the attacker is financially motivated instead of, for example, a nation state looking to break confidence


As a small side note, the SWIFT network also uses asymmetric cryptography for its operations:
https://www.swift.com/myswift/services/training/swift-training-catalogue/browse-swift-training-catalogue/swiftnet-public-key-infrastructure-pki

So in case of a CRQC becoming a reality, it's not only Bitcoin that is in trouble, and not only central banks, but also the worldwide traditional payment system.

Mostly, it's just making the observation that analyzing the economic impact and the attacker's (economic) rationale is far from straightforward, as soon as we get out of our Bitcoin bubble.

Taproot internal key, as it is not needed. Instead, a P2QRH output is just the 32-byte root of the tapleaf Merkle tree as defined
in [https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki BIP 341] and hashed with the tag "QuantumRoot" as shown below.

[[File:bip-0360/merkletree.png|center|550px|thumb|]]


The P2QRH scriptPubkey is saying SegWit version 3?

Shouldn't this be SegWit version 2, given that BIP141 is SegWit version 0 and BIP341 is SegWit version 1?

Unless there is a social community convention somewhere that reserves SegWit version 2 for another purpose.

key in P2QRH the root is hashed by itself using the tag "QuantumRoot".

<source>
D = tagged_hash("TapLeaf", bytes([leaf_version]) + ser_script(script))


This could be tagged with “QuantumLeaf” or “QuantumBranch”.

Domain separation is a good thing.

Even in the case where tapleaf validation is, all other things being equal, the same between a P2TR and a P2QRH, preventing accidental misuse of a tapleaf by wallets or other bitcoin software across SegWit versions would be a good thing IMHO.

This is debatable, but somehow I think it's good practice (you wish to avoid a P2QRH spend being accidentally replayed on a P2TR; note that the same signature digest as for BIP341/342 seems to be proposed).
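For reference, the BIP340 tagged-hash construction that underlies the quoted TapLeaf computation; swapping the domain-separation tag, as suggested, is a one-line change at the call site:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

# Distinct tags yield unrelated digests, which is what provides the
# domain separation between P2TR ("TapLeaf") and a hypothetical
# P2QRH tag ("QuantumLeaf").
```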

can be used in a quantum resistant manner. In a future BIP, we enable tapscript programs to verify two Post-Quantum (PQ) signature
algorithms, ML-DSA (CRYSTALS-Dilithium) and SLH-DSA (SPHINCS+). It is important to consider these two changes together because P2QRH must
be designed to support the addition of these PQ signature algorithms. The full description of these signatures will be provided in a future BIP.



Rationale footnotes at the end of the document sound good, like those in BIP340 or BIP341.

prudence dictates we take such risks seriously and ensure that Bitcoin always has at least two secure signature algorithms built
on orthogonal cryptographic assumptions. In the event one algorithm is broken, an alternative will be available. An added benefit
is that parties seeking to securely store Bitcoin over decades can lock their coins under multiple algorithms,
ensuring their coins will not be stolen even in the face of a catastrophic break in one of those signature algorithms.


A contrario, this is more cryptographic consensus code that has to be integrated, where any implementation bug might be fatal…

Personally, I think it's worth the risk to have multiple signature schemes in case of advances in post-quantum cryptanalysis (it's all quite uncharted territory here…), but I believe it might be more debatable among the community and the industry…
