src/app/blog/blake3-hazmat-api
@@ -91,7 +91,7 @@

But if you only ever ask the poor thing for hashes of *individual* chunks, there is no parallelism for it to exploit.
So to get the benefit of this awesomeness, we need to give the hash function multiple chunks to work with, even when computing subtree hashes.
- Iroh-blobs is working with *chunk groups* of 16 chunks, so the most expensive hashing related computation going on in iroh-blobs when sending or receiving data is to compute the hash of a subtree consisting of 16 chunks.
+ `iroh-blobs` works with *chunk groups* of 16 chunks. When sending or receiving data, the most expensive hashing-related computation going on in `iroh-blobs` is computing the hash of a subtree consisting of 16 chunks.
You can of course compute this sequentially using the primitives exposed by the guts API. But you only benefit from the parallelism of BLAKE3 if you give all the chunks to the hasher at once. This is exactly what our fork does: it adds a function to the guts API to hash an entire subtree:
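For contrast, here is roughly what the sequential approach looks like. This is a minimal sketch, assuming the unstable `blake3::guts` module (`ChunkState`, `parent_cv`, `CHUNK_LEN`) from the upstream crate; the helper `subtree_cv` and the `main` driver are illustrative names, not the fork's actual API:

```rust
use blake3::guts::{parent_cv, ChunkState, CHUNK_LEN};
use blake3::Hash;

/// Sequentially hash a complete, non-root subtree. `data` must span a
/// power-of-two number of full 1 KiB chunks, with the leftmost chunk at
/// absolute index `start_chunk` within the overall blob.
fn subtree_cv(start_chunk: u64, data: &[u8]) -> Hash {
    let n_chunks = (data.len() / CHUNK_LEN) as u64;
    assert!(data.len() % CHUNK_LEN == 0 && n_chunks.is_power_of_two());
    if n_chunks == 1 {
        // Leaf: hash one chunk, keyed by its absolute chunk index.
        let mut state = ChunkState::new(start_chunk);
        state.update(data);
        // `false`: this is a subtree hash, not the root of the whole tree.
        return state.finalize(false);
    }
    // Interior node: recurse into the two halves, then merge the chaining
    // values. Chunks are hashed one at a time here, so none of BLAKE3's
    // multi-chunk SIMD parallelism is used; that is the problem the fork's
    // subtree function solves by taking all the chunks in a single call.
    let mid = data.len() / 2;
    let left = subtree_cv(start_chunk, &data[..mid]);
    let right = subtree_cv(start_chunk + n_chunks / 2, &data[mid..]);
    parent_cv(&left, &right, false)
}

fn main() {
    // One iroh-blobs chunk group: 16 chunks = 16 KiB.
    let data = vec![0xab; 16 * CHUNK_LEN];
    println!("subtree cv: {}", subtree_cv(0, &data).to_hex());
}
```

Handing all 16 chunks to one subtree call instead lets the implementation spread the chunk compressions across SIMD lanes, which is where BLAKE3's throughput comes from.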