Update top-level README tables & status for a few now-active proposals. (cardano-foundation#338)

* Update top-level README tables & status for a few now-active proposals.
* Update newly Active CIPs in README table as well (missed this in my review)

Co-authored-by: Robert Phair <[email protected]>
@@ -53,17 +53,17 @@ A reference input is a transaction input, which is linked to a particular transaction output

- A referenced output must exist in the UTXO set.
- Any value on a referenced output is _not_ considered when balancing the transaction.
- The spending conditions on referenced outputs are _not_ checked, nor are the witnesses required to be present.
  - i.e. validators are not required to pass (nor are the scripts themselves or redeemers required to be present at all), and signatures are not required for pubkey outputs.
- Referenced outputs are _not_ removed from the UTXO set if the transaction validates.
- Reference inputs _are_ visible to scripts.

For clarity, the following two behaviours which are present today are unchanged by this proposal:

1. Transactions must _spend_ at least one output.[^1]
2. Spending an output _does_ require the spending conditions to be checked.[^2]

[^1]: This restriction already exists, and is important. It seems unnecessary, since transactions must always pay fees and fees must come from somewhere, but fees could in principle be paid via reward withdrawals, so the requirement to spend a UTXO is relevant.

[^2]: That is, this proposal does not change outputs or the spending of outputs; it instead adds a new way of _referring_ to outputs.
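The rules above can be sketched as a small illustrative model (hypothetical names, not ledger code): referenced outputs must exist in the UTXO set but are left untouched, while spent outputs are removed.

```python
# Illustrative model of how reference inputs interact with the UTXO set.
# Names and data shapes are hypothetical, not the ledger implementation.

def apply_transaction(utxo, spend_inputs, reference_inputs, new_outputs):
    """utxo maps (tx_id, index) -> output; returns the updated UTXO set."""
    # Phase 1: every spent *and* referenced output must exist in the UTXO set.
    for ref in list(spend_inputs) + list(reference_inputs):
        if ref not in utxo:
            raise ValueError(f"unknown input {ref}")
    updated = dict(utxo)
    for ref in spend_inputs:
        del updated[ref]  # spent outputs leave the UTXO set
    # Referenced outputs are deliberately *not* removed.
    updated.update(new_outputs)
    return updated
```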
### Script context
@@ -74,7 +74,7 @@ The script context therefore needs to be augmented to contain information about

Changing the script context will require a new Plutus language version in the ledger to support the new interface.
The change in the new interface is: a _new_ field is added to the structure which contains the list of reference inputs.

The interface for old versions of the language will not be changed.
Scripts with old versions cannot be spent in transactions that include reference inputs; attempting to do so will be a phase 1 transaction validation failure.
### Extra datums
@@ -93,35 +93,35 @@ The CDDL for transaction bodies will change to the following to reflect the new field.

The key idea of this proposal is to use UTXOs to carry information.
But UTXOs are currently a bad fit for distributing information.
Because of locality, we have to include outputs that we use in the transaction, and the only way we have of doing that is to _spend_ them - and a spent output cannot then be referenced by anything else.

To put it another way: outputs are resource-like, but information is not resource-like.

The solution is to add a way to _inspect_ ("reference") outputs without spending them.
This allows outputs to play double duty as resource containers (for the value they carry) and information containers (for the data they carry).
### Requirements
We have a number of requirements that we need to fulfil.

- Determinism
  - It must be possible to predict the execution of scripts precisely, given the transaction.
- Locality
  - All data involved in transaction validation should be included in the transaction or the outputs which it spends (or references).
- Non-interference
  - As far as possible, transactions should not interfere with others. The key exception is when transactions consume resources that other transactions want (usually by consuming UTXO entries).
- Replay protection
  - The system should not be attackable (e.g. allow unexpected data reads) by replaying old traffic.
- Storage control and garbage-collection incentives
  - The amount of storage required by the system should have controls that prevent it from overloading nodes, and ideally should have incentives to shrink the amount of storage that is used over time.
- Optimized storage
  - The system should be amenable to optimized storage solutions.
- Data transfer into scripts
  - Scripts must have a way to observe the data.
@@ -193,7 +193,7 @@ This is actually a very important feature.

Since anyone can lock an output with any address, addresses are not that useful for identifying _particular_ outputs on chain, and instead we usually rely on looking for particular tokens in the value locked by the output.
Hence, if a script is interested in referring to the data attached to a _particular_ output, it will likely want to look at the value that is locked in the output.

For example, an oracle provider would need to distinguish the outputs that they create (with good data) from outputs created by adversaries (with bad data).
They can do this with a token, so long as scripts can then see the token!
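The oracle pattern just described can be sketched as follows — a hypothetical check (not the Plutus API; names and data shapes are illustrative) that locates the reference input carrying the oracle's authentication token and reads its datum:

```python
# Hypothetical sketch: each reference input is modelled as a dict with a
# 'value' (policy_id -> asset_name -> quantity) and a 'datum'. A script
# trusts only the datum guarded by the oracle's token.

def find_oracle_datum(reference_inputs, policy_id, asset_name):
    """Return the datum of the first reference input holding the token."""
    for out in reference_inputs:
        if out["value"].get(policy_id, {}).get(asset_name, 0) >= 1:
            return out["datum"]
    raise ValueError("no reference input carries the oracle token")
```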
@@ -19,10 +19,10 @@ This will allow much simpler communication of datum values between users.

## Motivation

Conceptually, datums are pieces of data that are attached to outputs.
However, in practice datums are implemented by attaching _hashes_ of datums to outputs, and requiring that the spending transaction provides the actual datum.

This is quite inconvenient for users.
Datums tend to represent the result of computation done by the party who creates the output, and as such there is almost no chance that the spending party will know the datum without communicating with the creating party.
That means that either the datum must be communicated between parties off-chain, or communicated on-chain by including it in the witness map of the transaction that creates the output ("extra datums").
This is also inconvenient for the spending party, who must watch the whole chain to spot it.
@@ -39,7 +39,7 @@ Transaction outputs are changed so that the datum field can contain either a hash or the actual datum.

The min UTXO value for an output with an inline datum depends on the size of the datum, following the `coinsPerUTxOWord` protocol parameter.

When an output with an inline datum is spent, the spending transaction does not need to provide the datum itself.
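As a rough illustration of how `coinsPerUTxOWord` prices datum size: the word-size accounting below is deliberately simplified (the real Alonzo/Babbage ledger rules differ in detail), and the constant is just one historical mainnet value.

```python
# Simplified sketch of the min-UTXO calculation: the min value scales with
# the UTXO entry's size in 8-byte words, so an inline datum raises it in
# proportion to the datum's size. Not the exact ledger formula.

COINS_PER_UTXO_WORD = 34_482  # lovelace; a historical mainnet parameter value

def min_utxo_value(base_words, datum_bytes):
    """base_words: entry size without the datum; datum adds ceil(len/8) words."""
    datum_words = (datum_bytes + 7) // 8
    return COINS_PER_UTXO_WORD * (base_words + datum_words)
```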
### Script context
@@ -52,7 +52,7 @@ There are two changes in the new version of the interface:

- The datum field on transaction outputs can either be a hash or the actual datum.
- The datum field on transaction inputs can either be a hash or the actual datum.

The interface for old versions of the language will not be changed.
Scripts with old versions cannot be spent in transactions that include inline datums; attempting to do so will be a phase 1 transaction validation failure.
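The hash-or-inline rule can be sketched as follows (hypothetical names, not ledger code): an inline datum is used directly, while a datum hash must be matched by a witness-supplied preimage. Cardano datum hashes are blake2b-256, which the sketch mirrors with Python's standard `hashlib`.

```python
# Sketch of datum resolution at spending time. Illustrative only.
import hashlib

def resolve_datum(output_datum_field, witness_datums):
    """output_datum_field is ('inline', datum_bytes) or ('hash', digest).
    witness_datums maps digest -> datum bytes supplied by the transaction."""
    kind, payload = output_datum_field
    if kind == "inline":
        return payload  # spending transaction need not provide anything
    datum = witness_datums.get(payload)
    if datum is None or hashlib.blake2b(datum, digest_size=32).digest() != payload:
        raise ValueError("missing or mismatched datum witness")
    return datum
```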
### CDDL
@@ -78,10 +78,10 @@ Since inline datums change very little about the model apart from where data is

### UTXO set size

This proposal gives users a way to put much larger amounts of data into the UTXO set.
Won't this lead to much worse UTXO set bloat?

The answer is that we already have a mechanism to discourage this, namely the minimum UTXO value.
If inline datums turn out to drive significantly increased space usage, then we may need to increase `coinsPerUTxOWord` in order to keep the UTXO size down.
That will be costly and inconvenient for users, but will still allow them to use inline datums where they are most useful and the cost is bearable.
Furthermore, we hope that we will in fact be able to _reduce_ `coinsPerUTxOWord` when the upcoming work on moving the UTXO mostly to on-disk storage is complete.
@@ -165,4 +165,4 @@ Hence we choose both option 1s and do _not_ provide backwards compatibility for

## References

[1]: Chakravarty, Manuel MT, et al. "The extended UTXO model."
@@ -26,7 +26,7 @@ Script sizes pose a significant problem. This manifests itself in two ways:

We would like to alleviate these problems.

The key idea is to use reference inputs and modified outputs which carry actual scripts ("reference scripts"), and allow such reference scripts to satisfy the script witnessing requirement for a transaction.
This means that the transaction which _uses_ the script will not need to provide it at all, so long as it references an output which contains the script.

## Specification
@@ -46,10 +46,10 @@ Changing the script context will require a new Plutus language version in the ledger to support the new interface.

The change is: a new optional field is added to outputs and inputs to represent reference scripts.
Reference scripts are represented by their hash in the script context.

The interface for old versions of the language will not be changed.
Scripts with old versions cannot be spent in transactions that include reference scripts; attempting to do so will be a phase 1 transaction validation failure.

### CDDL

The CDDL for transaction outputs will change as follows to reflect the new field.

```
transaction_output =
  [ ...
  , ? ref_script : plutus_script
  ]
```

TODO: can we use a more generic type that allows _any_ script in a forwards-compatible way?
## Rationale
@@ -89,11 +89,11 @@ This is clearly not what you want: the reference script could be anything, perhaps…

With inline datums, we could put reference scripts in the datum field of outputs.

This approach has two problems.
First, there is a representation confusion: we would need some way to know that a particular datum contained a reference script.
We could do this implicitly, but it would be better to have an explicit marker.

Secondly, this prevents having an output which is locked by a script that needs a datum _and_ has a reference script in it.
While this is a more unusual situation, it's not out of the question.
For example, a group of users might want to use a Plutus-based multisig script to control the UTXO with a reference script in it.
@@ -28,7 +28,7 @@ Additionally, there cannot be more than *maxColInputs* (protocol parameter) inputs

However,

- Restriction #1 is problematic because hardcore dApp users rarely have UTXO entries that do not contain any tokens. To combat this, wallets have created a special wallet-dependent "collateral" UTXO reserved for use as collateral for dApps, which is not a great UX.
- Restriction #6 is problematic because wallets want to protect users from signing transactions with large collateral, as they cannot verify whether or not the transaction will fail when submitted (this is especially true for hardware wallets).
@@ -20,17 +20,17 @@ This document describes the addition of a new Plutus builtin for serialising `BuiltinData`

## Motivation

As part of developing on-chain script validators for [the Hydra Head protocol](https://eprint.iacr.org/2020/299), we stumbled across a peculiar need for on-chain scripts: we need to verify and compare digests obtained from hashing elements of the script's surrounding transaction.

In this particular context, those elements are transaction outputs (a.k.a. `TxOut`). While Plutus already provides built-ins for hashing data structures (e.g. `sha2_256 :: BuiltinByteString -> BuiltinByteString`), it does not provide generic ways of serialising some data type to `BuiltinByteString`.

In an attempt to pursue our work, we have implemented [an on-chain library (plutus-cbor)][plutus-cbor] for encoding data types as structured [CBOR / RFC 8949][CBOR] in a _relatively efficient_ way (although still quadratic, it is as efficient as it can be with Plutus' available built-ins) and measured the memory and CPU cost of encoding a `TxOut` **in a script validator on-chain**.

[Figure: memory and CPU costs of encoding a `TxOut` on-chain]

The above graph shows the memory and CPU costs, **relative to a baseline**, of encoding a `TxOut` using `plutus-cbor`, as a function of the number of assets present in that `TxOut`. The costs on the y-axis are relative to the maximum execution budgets (as per mainnet's parameters, December 2021) allowed for a single script execution. As can be seen, this is of linear complexity, i.e. O(n) in the number of assets. These results can be reproduced using the [encoding-cost][] executable in our repository.

> Note that we have also calculated similar costs for ada-only `TxOut`s, as a function of the number of `TxOut`s, which is about twice as bad but of similar linear shape.

As we can see on the graph, the cost is manageable for a small number of assets (or equivalently, a small number of outputs) but rapidly becomes limiting. Ideally, we would prefer the transaction size to be the limiting factor when it comes to the number of outputs we can handle in a single validation.
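To make the CBOR framing concrete, here is a toy encoder covering just three of RFC 8949's major types (unsigned integers, byte strings, arrays) — the same primitives a `TxOut` encoding is built from. It is purely illustrative and is not the plutus-cbor implementation:

```python
# Toy CBOR encoder (RFC 8949, major types 0, 2, 4 only). Each item starts
# with a head byte: 3 bits of major type plus a 5-bit argument, with larger
# arguments spilling into 1/2/4/8 following bytes.

def cbor_head(major, arg):
    if arg < 24:
        return bytes([(major << 5) | arg])
    for info, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if arg < 1 << (8 * size):
            return bytes([(major << 5) | info]) + arg.to_bytes(size, "big")
    raise ValueError("argument too large")

def encode(value):
    if isinstance(value, int) and value >= 0:
        return cbor_head(0, value)               # major type 0: unsigned int
    if isinstance(value, bytes):
        return cbor_head(2, len(value)) + value  # major type 2: byte string
    if isinstance(value, list):                  # major type 4: array
        return cbor_head(4, len(value)) + b"".join(encode(v) for v in value)
    raise TypeError(f"unsupported: {type(value)}")
```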
@@ -65,7 +65,7 @@ plutus_data =

```
  / [ * plutus_data ]
  / big_int
  / bounded_bytes

constr<a> =
    #6.121([])
  / #6.122([a])
```

@@ -76,7 +76,7 @@ constr<a> =

```
  / #6.127([a, a, a, a, a, a])
  ; similarly for tag range: #6.1280 .. #6.1400 inclusive
  / #6.102([uint, [* a]])

big_int = int / big_uint / big_nint
big_uint = #6.2(bounded_bytes)
big_nint = #6.3(bounded_bytes)
```
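The constructor-tag scheme in the fragment above can be summarised in a few lines; this is an illustrative helper (hypothetical name), not Plutus code:

```python
# Mapping of Plutus Data constructor indices to CBOR tags, as in the CDDL
# above: indices 0..6 use compact tags 121..127, indices 7..127 use the
# range 1280..1400, and anything else falls back to the general form
# tag 102 with an explicit [index, fields] pair.

def constr_tag(index):
    if 0 <= index <= 6:
        return 121 + index        # compact alternatives #6.121 .. #6.127
    if 7 <= index <= 127:
        return 1280 + (index - 7)  # tag range #6.1280 .. #6.1400
    return 102                     # general form #6.102([uint, [* a]])
```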
@@ -95,18 +95,18 @@ The `Data` type is a recursive data-type, so costing it properly is a little tricky.

We propose to re-use this instance to define a cost model linear in the size of data defined by this instance. What remains is to find a proper coefficient and offset for that linear model. To do so, we can benchmark the execution costs of encoding arbitrarily generated `Data` of various sizes, and retro-fit the cost into a linear model (provided that the results still attest to that type of model).

Benchmarking and costing `serialiseData` was done in [this PR](https://github.com/input-output-hk/plutus/pull/4480) according to this strategy. As the benchmark is not very uniform, because some kinds of `Data` "structures" differ in the CPU time taken to process, the linear model is used as an **upper bound**, thus conservatively overestimating actual costs.
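The retro-fitting step can be pictured with a tiny upper-bound fit — illustrative only, with made-up numbers and hypothetical helper names, not the actual Plutus costing code:

```python
# Fit an affine cost model cost ~= slope*size + intercept that sits at or
# above every benchmark point, matching the "upper bound" strategy above.

def fit_upper_bound(samples):
    """samples: list of (size, measured_cost). Returns (slope, intercept)
    such that slope*size + intercept >= cost for every sample."""
    slope = max(cost / size for size, cost in samples if size > 0)
    intercept = max(cost - slope * size for size, cost in samples)
    return slope, intercept

def predict(slope, intercept, size):
    return slope * size + intercept
```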
## Rationale
* Easy to implement, as it reuses existing code from the Plutus codebase;
* Such a built-in is generic enough to also cover a wider set of use-cases, while nicely fitting ours;
* Favoring manipulation of structured `Data` is an appealing alternative to many `ByteString` manipulation use-cases;
* CBOR as an encoding is a well-known and widely used standard in Cardano, so existing tools can be used;
* The hypothesis of the cost model here is that serialisation cost is proportional to the `ExMemoryUsage` for `Data`; which means, given the current implementation, proportional to the number and total memory usage of nodes in the `Data` tree-like structure.
* Benchmarking the costs of serialising `TxOut` values between [plutus-cbor][] and [cborg][] confirms [cborg][] and the existing [encodeData][] implementation in Plutus as a great candidate for implementing the built-in:

[Figure: serialisation cost benchmark, plutus-cbor vs cborg]

Results can be reproduced with the [plutus-cbor benchmark][].

## Path To Active
@@ -124,7 +124,7 @@ Benchmarking and costing `serialiseData` was done in [this PR](https://github.co

## Backward Compatibility

* Additional built-in: it can be added to PlutusV1 and PlutusV2 without breaking any existing script validators. A hard fork is however required, as it would make more blocks validate.