BIP-360: QuBit - Pay to Quantum Resistant Hash #1670
Conversation
Interesting (the question of resistance to quantum computing may have resurged lately with the publication of https://scottaaronson.blog/?p=8329, see also https://x.com/n1ckler/status/1839215426091249778).
@cryptoquick Can you begin to write up the sections currently marked as TBD, along with a backwards compatibility section (to describe incompatibilities, severity, and suggest mitigations, where applicable/relevant)? We've begun to reserve a range of BIP numbers for this topic, pending continued progress here.
@cryptoquick ping for an update here. Have you seen https://groups.google.com/g/bitcoindev/c/p8xz08YTvkw / https://github.com/chucrut/bips/blob/master/bip-xxxx.md? It may be interesting to review each other and possibly collaborate.
…apleaf merkle tree
Hey @EthanHeilman, I wanted to cross-link this post on delving. Adding a new tapleaf version with dynamically endorsed script leaves may give us a way to start migrating people early, even before ML-DSA and SLH-DSA opcodes are defined, but still allow the use of those opcodes once they're spec'd out and deployed. That said, if we can package those opcodes together alongside BIP360, I still think that'd be a better option. It will lead to less complexity and confusion overall.
Changed some dashes
@murchandamus We are putting up the PQ signature BIP soon. Would you rather it be part of this PR or a new PR?
Remove dashes in BIP numbers and change to SegWit version 2
I have a suggestion @EthanHeilman
- Witness program calculated as SHA256 of binary encoding of PI
Hey @EthanHeilman and @cryptoquick, given the number of comments this PR already has, I think it would be clearer to have a separate PR for the companion BIP.
Agree on a new BIP and keeping them focused. A range of BIP numbers was reserved for a series on this topic.
Adding PQ signatures via a tapleaf version increase does not introduce any new opcodes and allows previously written tapscript programs to be used with PQ signatures
by simply using the new tapleaf version. Instead of developers explicitly specifying the intended signature algorithm through an opcode, the algorithm
to use must be indicated within the public key or public key hash<ref>'''Why not have CHECKSIG infer the algorithm based on signature size?''' Each of the three signature algorithms, Schnorr, ML-DSA, and SLH-DSA, has a unique signature size. The problem with using signature size to infer the algorithm is that the spender supplies the signature. This would allow a public key that was intended to be verified with Schnorr to be verified with ML-DSA because the spender supplied an ML-DSA signature. Signature algorithms are often not secure if you can mix and match public keys and signatures across algorithms.</ref>.
The disadvantage of this approach is that it requires a new tapleaf version each time we want to add a new signature algorithm.
You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.
For example, let's say the new multi-algo version of OP_CHECKSIG chooses the signature algo based on a version byte header in the pubkey: 0x00 for Schnorr, 0x01 for ML-DSA, 0x02 for SLH-DSA. Define any other public-key version byte as being an "auto-succeed" sigalg type. Adding a new algorithm in the future is as easy as redefining one such "sigalg" version.
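A minimal sketch of that dispatch idea (the byte values and verifier names below are illustrative assumptions for this comment, not something BIP 360 specifies):

```python
# Toy sketch of version-byte dispatch inside a hypothetical multi-algorithm
# OP_CHECKSIG. Byte values and verifier stubs are assumptions for this example.

SCHNORR, ML_DSA, SLH_DSA = 0x00, 0x01, 0x02

def verify_schnorr(pk, msg, sig): ...   # placeholder for BIP340 verification
def verify_ml_dsa(pk, msg, sig): ...    # placeholder for ML-DSA verification
def verify_slh_dsa(pk, msg, sig): ...   # placeholder for SLH-DSA verification

def checksig_multialgo(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    version, key_body = pubkey[0], pubkey[1:]
    if version == SCHNORR:
        return verify_schnorr(key_body, msg, sig)
    if version == ML_DSA:
        return verify_ml_dsa(key_body, msg, sig)
    if version == SLH_DSA:
        return verify_slh_dsa(key_body, msg, sig)
    # Any other version byte "auto-succeeds", leaving room for a future
    # soft fork to assign it a new signature algorithm (e.g. SQIsign).
    return True
```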
You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.
This works as long as the signatures are smaller than the max stack element size of 520 bytes. Unfortunately both SLH-DSA and ML-DSA signatures are well over the max stack element size.
The precedence, at a rough level, works like this (see the sketch after the list):
- IF Witness version not recognized --> return SUCCESS
- IF Witness version == 1 and tapleaf version not recognized --> return SUCCESS
- IF tapscript contains OP_SUCCESSx opcode --> return SUCCESS
- IF stack item size > MAX_SCRIPT_ELEMENT_SIZE in witness stack --> return FAIL
- Execute tapscript on witness stack, if OP_CHECKSIG has pubkey of size != 32 or 0 --> return SUCCESS
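A rough restatement of that precedence as straight-line code (0xc0 is the BIP342 tapscript leaf version; the two helpers are hypothetical placeholders, and the logic simplifies away everything not in the list above):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520

def contains_op_successx(tapscript): ...        # placeholder: scan for OP_SUCCESSx
def execute_tapscript(tapscript, stack): ...    # placeholder: run the script

def spend_is_valid(witness_version, tapleaf_version, tapscript, witness_stack):
    if witness_version not in (0, 1):
        return "SUCCESS"                        # unknown witness version
    if witness_version == 1 and tapleaf_version != 0xc0:
        return "SUCCESS"                        # unknown tapleaf version
    if contains_op_successx(tapscript):
        return "SUCCESS"
    if any(len(item) > MAX_SCRIPT_ELEMENT_SIZE for item in witness_stack):
        return "FAIL"
    # Tapscript executes; inside it, an OP_CHECKSIG pubkey whose size is
    # neither 32 nor 0 is an unknown pubkey type and succeeds.
    return execute_tapscript(tapscript, witness_stack)
```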
We use OP_SUCCESSx for new opcodes, but if we wanted to repurpose OP_CHECKSIG we would need to use a new tapleaf version or a new witness version.
Sorry, maybe I should clarify. I'm aware of the script size limits and their effects. I'm saying that, once we add a new tapscript version, we get an opportunity to redefine how OP_CHECKSIG works, so we can add an "always succeed" path for pubkeys with an unrecognized format (e.g. sigalg version 0x03 and up).
Then, if/when we want to add new signature algos in the future (such as SQIsign), we don't need a third newer tapscript version.
So the statement "it requires a new tapleaf version each time we want to add a new signature algorithm" is not entirely correct.
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs
that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack
Typo fix:

Original:
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack

Suggested:
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only have effect for transaction outputs that use the new opcodes. Otherwise this stack element size limit increase would be a hard fork. If the tapleaf version is used, then the stack
This complexity is one more reason to prefer the new tapscript version approach, IMO.
A new tapleaf version or witness version would require maintaining two versions of tapscript.
- This would be messier in the bitcoin-core code base and more likely to introduce an accidental hardfork or other bug. It wouldn't be terrible, but all things being equal, we should choose the simpler option.
- It always requires that developers care about tapleaf versions for opcode features. Loss of funds would result if the wrong version is used.
My personal take is that we should only use tapleaf versions for major rewrites of Bitcoin script. For instance, GSR would be a great fit for a new tapleaf version. It would likely have its own interpreter.cpp file. Developers aren't going to confuse GSR script with tapscript.
To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is
This would kill the classic pattern of OP_DUP OP_SHA256 <hash> OP_EQUALVERIFY OP_CHECKSIG for scripts using PQ signatures.
Could we not instead limit the total stack size to 520kb more explicitly?
Does anyone do OP_DUP on signatures? This wouldn't break for PQ public keys.
Could we not instead limit the total stack size to 520kb more explicitly?
I personally would prefer that the limitation was expressed this way, but that is likely to be a highly controversial soft fork that requires careful consideration of performance implications.
If you think such a soft fork can get activated, do it and we will use it in BIP 360. I worry that including this change in BIP 360 will reduce the chances of BIP 360 activating to almost zero.
Sorry, I posted this comment before I read the section where you talked about compressing ML-DSA pubkeys using a hash, and conjoining the ML-DSA pubkey and signature together. Please ignore.
public keys in excess of 520 bytes. For instance:

* ML-DSA public keys are 1,312 bytes and signatures are 2,420 bytes
* SLH-DSA public keys are 32 bytes and signatures are 7,856 bytes
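For a quick sense of scale, comparing those figures against the current 520-byte stack element limit and the existing 1,000-element stack limit (the rough source of the ~8 MB worst case mentioned elsewhere in this thread):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520   # current tapscript stack element limit
MAX_STACK_SIZE = 1000           # current limit on the number of stack elements

sizes = {
    "ML-DSA public key": 1312,
    "ML-DSA signature": 2420,
    "SLH-DSA public key": 32,
    "SLH-DSA signature": 7856,
}

for name, size in sizes.items():
    verdict = "fits within" if size <= MAX_SCRIPT_ELEMENT_SIZE else "exceeds"
    print(f"{name}: {size:>5} bytes, {verdict} the {MAX_SCRIPT_ELEMENT_SIZE}-byte limit")

# Worst case if oversized elements could be freely duplicated:
print(f"{MAX_STACK_SIZE} x {sizes['SLH-DSA signature']} bytes = "
      f"{MAX_STACK_SIZE * sizes['SLH-DSA signature'] / 1e6:.1f} MB of stack")
```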
I'm sure you're aware already, but by tuning SLH-DSA parameters down we can get signatures of 4 kilobytes or less, about on-par with ML-DSA, while still being usable securely for hundreds of millions of signatures, far more than any bitcoin key will ever need to sign. We can condense signatures even more using clever compression by the signer.
I think this would go a long way to making SLH-DSA more practical as an option. ML-DSA's main advantage then would not be its signature size, but its faster signing and verification times.
I've looked at some of the XMSS schemes that provide very tunable numbers of signatures. It is a neat idea.
So far we have not included this in the BIP because of the design rationale of "Use standardized post-quantum signature algorithms." This is so we can benefit from all the other research, hardware rollouts and software support.
hundreds of millions of signatures, far more than any bitcoin key will ever need to sign
Where do you draw the line here? A lightning network channel would in theory use millions of signatures for one public key, but they probably should be using ML-DSA. I don't like having a special rule for one signature scheme, although hundreds of millions of signatures is unlikely to ever happen. But why not 1 million signatures, or 10,000 signatures? What's the right number?
Where do you draw the line here
Great question. There are various rationales you could go by, but I would frame it like this:
- We should pick some duration X, denominated in "years of repeated signing" (YORS).
- Assume a wallet is somehow tricked into repeatedly signing random messages for an adversary using an SLH-DSA key.
- Assume only a single benchmarked CPU is used to produce the signatures, and assume zero latency between victim and attacker.
- After X years of repeated signing, the public key should still maintain at least $2^{128}$ security against the attacker forging any signatures.
I don't know what the magic number X is there. Realistically I don't see any wallet ever signing data continuously for more than a few years, but maybe others would prefer stronger guarantees. Anyway, this is a number we can more easily debate about.
Some suggestions:
- Maybe X = 30 YORS to match human reproductive cycles - this is roughly the global average age of first childbearing.
- Maybe X = 45 YORS to match the length of an average human's working life - keys last your entire career.
- Maybe X = 70 YORS to match an average human lifetime - keys live as long as we do.
Before we pin this down, we should have a working SPHINCS implementation we can benchmark against. Then we can pin down one or more parameter sets to standardize based on its performance.
I was having trouble using the official parameter exploration script, so I made a faster/easier ported version in python, if anyone is curious: https://gist.github.com/conduition/469725009397c08a2d40fb87c8ca7baa
To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is
because currently there is no way to get a stack element larger than 520 bytes onto the stack so triggering this rule is currently
Don't forget about OP_OVER, OP_2OVER, OP_2DUP, OP_3DUP, OP_PICK, OP_IFDUP, and OP_TUCK, which all copy stack items.
What would be the impact here?
If you want to impose new size limits on stack item duplication, then we should extend the BIP's wording to cover not just OP_DUP but also any opcode which copies stack items. Here's my suggested wording:
To prevent OP_DUP and other opcodes from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we modify all opcodes to fail if copying any stack elements larger than 520 bytes. Note this change is not consensus critical and does not require any sort of fork.
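As a toy illustration of what such a rule could look like in an interpreter (not Bitcoin Core's actual code; the function and opcode list simply restate the comment above):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520

# Opcodes that copy existing stack items, per the list above.
COPYING_OPCODES = {
    "OP_DUP", "OP_2DUP", "OP_3DUP", "OP_OVER", "OP_2OVER",
    "OP_PICK", "OP_IFDUP", "OP_TUCK",
}

def copy_allowed(opcode: str, items_to_copy: list) -> bool:
    """Return False (script failure) if a copying opcode would duplicate
    any stack element larger than 520 bytes."""
    if opcode in COPYING_OPCODES:
        return all(len(item) <= MAX_SCRIPT_ELEMENT_SIZE for item in items_to_copy)
    return True
```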
Missed these! Thanks
Commit-reveal schemes can only be spent from and to outputs that are not vulnerable to long-exposure quantum attacks, such as
P2PKH, P2SH, P2WPKH, etc. To use tapscript outputs with this system either a soft fork could disable the key path spend of P2TR outputs
or P2QRH could be used here as it does not have a key path spend and thus is not vulnerable to long-exposure quantum attacks.
Recently on the mailing list, we've had discussions about recovering coins from exposed pubkeys by using a BIP32 xpriv commit/reveal protocol. So we can rescue coins that are vulnerable to long-exposure attacks. It just requires a soft fork to disable regular EC spending without defined commit/reveal steps.
Thanks for the efforts on this proposal. It answers most of my previous comments, most of which can be resolved (I still wonder about the long vs short exposure attack distinction), and I've added more. I'm in the "Design" section so far.
Seeing that the idea is to have CRYSTALS-Dilithium and SPHINCS+ proposed as the concrete signature algorithms, I might start to hack on an implementation for FALCON, whose signature size is smaller than Dilithium's while also being lattice-based cryptography. It's an interesting thing to hack on. Libbitcoinpqc is in C, so perfect.
This document is licensed under the 3-clause BSD license.
=== Motivation === |
By comparison, BIP340 is reasonably wordy about the security properties of the standardized signature scheme:
https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki
See the arguments on provable security, non-malleability and linearity.
I'm not sure if you're really familiar with post-quantum cryptography (e.g. https://arxiv.org/pdf/1710.10377 is a good starter), though there are really few papers doing cryptanalysis of quantum signature schemes in the Bitcoin setting. While of course one can object to why it matters to do cryptanalysis in the Bitcoin setting, one should keep in mind that some argued property in the ECC literature (e.g. signature non-malleability) might have tremendous downsides if not well understood (e.g. breaking BIP141's "trust-free unconfirmed transaction dependency chain").
Sure, the "motivation" section is for now very drafty and over-verbose, but that's why it's still a draft.
* P2PK outputs (Satoshi's coins, CPU miners, starts with 04)
* Reused addresses (any type, except P2QRH)
* Taproot addresses (starts with bc1p)
* Extended public keys, commonly known as "xpubs"
* Wallet descriptors
See comments there on trying to define “long exposure” vs “short exposure” attacks:
https://github.com/bitcoin/bips/pull/1670/files#r1966885340
It's for the above reason that, for those who wish to be prepared for a quantum emergency, it is recommended that no more
than 50 bitcoin are kept under a single, distinct, unused Native SegWit (P2WPKH, "bc1q") address at a time. This is
assuming that the attacker is financially motivated instead of, for example, a nation state looking to break confidence
For a small side-note, SWIFT network also uses asymmetric cryptography for its operations:
https://www.swift.com/myswift/services/training/swift-training-catalogue/browse-swift-training-catalogue/swiftnet-public-key-infrastructure-pki
So in case of a CRQC becoming a reality, it's not only Bitcoin that is in trouble, nor only central banks, but also the worldwide traditional payment system.
Mostly, it's just making the observation that analyzing the economic impact and the attacker's (economic) rationale is far from straightforward, as soon as we get out of our Bitcoin bubble.
Taproot internal key, as it is not needed. Instead, a P2QRH output is just the 32-byte root of the tapleaf Merkle tree as defined
in [https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki BIP 341] and hashed with the tag "QuantumRoot" as shown below.

[[File:bip-0360/merkletree.png|center|550px|thumb|]]
The P2QRH scriptPubKey is saying SegWit version 3?
Shouldn't this be SegWit version 2, given that BIP141 is SegWit version 0 and BIP341 is SegWit version 1?
Unless there is a social community convention somewhere that we reserve SegWit version 2 for another purpose.
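For reference, a BIP141-style sketch of how a native SegWit vN scriptPubKey wraps a 32-byte witness program; which version number P2QRH should use is exactly the open question here:

```python
def segwit_script_pubkey(witness_version: int, program: bytes) -> bytes:
    """Build <version opcode> <push of 32-byte witness program>, versions 1-16."""
    assert 1 <= witness_version <= 16 and len(program) == 32
    version_opcode = 0x50 + witness_version   # OP_1 = 0x51, OP_2 = 0x52, OP_3 = 0x53
    return bytes([version_opcode, len(program)]) + program

quantum_root = bytes(32)  # placeholder for the tagged Merkle root
print(segwit_script_pubkey(2, quantum_root).hex())  # SegWit v2, as suggested here
print(segwit_script_pubkey(3, quantum_root).hex())  # SegWit v3, as the draft reads
```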
key in P2QRH the root is hashed by itself using the tag "QuantumRoot".

<source>
D = tagged_hash("TapLeaf", bytes([leaf_version]) + ser_script(script))
This could be tagged with “QuantumLeaf” or “QuantumBranch”.
Domain separation is a good thing.
Even in the case where a tapleaf validation is, all other things being equal, the same between a P2TR and a P2QRH, preventing accidental misuse of a tapleaf by wallets or other bitcoin software across SegWit versions would be a good thing IMHO.
This is debatable, but somehow I think it's good practice (you wish to avoid a P2QRH spend being accidentally replayed on a P2TR, and note that the same signature digest as for BIP341/342 sounds to be proposed).
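For context, here is the standard BIP340-style tagged hash together with how the domain separation suggested above might look; the "QuantumLeaf" tag is this comment's suggestion, while "QuantumRoot" comes from the draft text quoted earlier, and the leaf values below are purely illustrative:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

def ser_script(script: bytes) -> bytes:
    """Minimal length prefix, sufficient for scripts shorter than 253 bytes."""
    assert len(script) < 253
    return bytes([len(script)]) + script

leaf_version, script = 0xc0, bytes.fromhex("51")  # illustrative leaf values

# As quoted from the draft (reusing the BIP341 "TapLeaf" tag):
d_tapleaf = tagged_hash("TapLeaf", bytes([leaf_version]) + ser_script(script))

# With the suggested domain separation, a P2QRH leaf would get its own tag,
# and the single-leaf root is hashed by itself with "QuantumRoot" per the draft:
d_quantum = tagged_hash("QuantumLeaf", bytes([leaf_version]) + ser_script(script))
root = tagged_hash("QuantumRoot", d_quantum)
```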
can be used in a quantum resistant manner. In a future BIP, we enable tapscript programs to verify two Post-Quantum (PQ) signature
algorithms, ML-DSA (CRYSTALS-Dilithium) and SLH-DSA (SPHINCS+). It is important to consider these two changes together because P2QRH must
be designed to support the addition of these PQ signature algorithms. The full description of these signatures will be provided in a future BIP.
Rationale footnotes at the end of the document sound good, like done for BIP340 or BIP341.
prudence dictates we take such risks seriously and ensure that Bitcoin always has at least two secure signature algorithms built
on orthogonal cryptographic assumptions. In the event one algorithm is broken, an alternative will be available. An added benefit
is that parties seeking to securely store Bitcoin over decades can lock their coins under multiple algorithms,
ensuring their coins will not be stolen even in the face of a catastrophic break in one of those signature algorithms.
A contrario, this is more cryptographic consensus code that has to be integrated, where any implementation bug might be fatal...
Personally, I think it's worth the risk to have multiple signature schemes in case of advances in post-quantum cryptanalysis (it's all quite uncharted territory here...), but I believe it might be more debatable among the community and the industry...
This proposal spent several months gathering feedback from the mailing list and from other advisors. It is hopefully polished enough to submit upstream.
Let me know if you have any questions or feedback, and of course feel free to submit suggestions.
Thank you for your time.