Batched commitment of polys with different sizes on mpcs layer #864

Open · wants to merge 8 commits into base: master

Conversation

@Jiangkm3 (Collaborator) commented Mar 18, 2025

This PR enables Ceno to batch-commit polynomials of different sizes on the mpcs layer using any commitment scheme. The main contributions are:

  • pcs_batch_commit_diff_size, pcs_batch_commit_diff_size_and_write, pcs_batch_open_diff_size, and pcs_batch_verify_diff_size in mpcs/src/lib.rs, which implement the batching.
  • run_diff_size_batch_commit_open_verify in mpcs/src/lib.rs, as well as batch_commit_diff_size_open_verify() in mpcs/src/basefold.rs and mpcs/src/whir.rs, for testing.

The idea is to use packing, which merges the polynomials into a sequence of large polynomials of a fixed size, packed_polys. If the last few smallest polys cannot fill one of these packed polys, a final smaller polynomial final_poly is introduced (a sketch of the grouping is given after the list below). The process is then as follows:

  1. Commit: the prover commits to all packed_polys together (since they are of the same size) and to final_poly separately.
  2. Prove: the prover first uses a unify_sumcheck to reduce the different claim points on the polys down to a single common point per poly. It then performs packing to produce packed_polys and, if it exists, final_poly. Finally, it invokes the MPCS protocol to prove packed_polys and final_poly separately.
  3. Verify: the verifier checks the correctness of the unify_sumcheck, the packing, and the MPCS openings.
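
For concreteness, here is a minimal sketch of the grouping idea behind packing. It is not the PR's code: plan_packing is a hypothetical name, and the sketch assumes the poly sizes are non-increasing powers of two and that pack_size is a power of two at least as large as the largest poly, so a pack always fills up exactly.

```rust
// Hypothetical sketch of the grouping step (not the PR's actual code).
// `sizes` are the poly lengths, assumed to be non-increasing powers of two;
// `pack_size` is the fixed size of each packed poly (a power of two at least
// as large as the largest poly). Returns the poly indices assigned to each
// packed poly, plus the leftover indices that would form final_poly.
fn plan_packing(sizes: &[usize], pack_size: usize) -> (Vec<Vec<usize>>, Vec<usize>) {
    let mut packed_polys: Vec<Vec<usize>> = Vec::new();
    let mut current: Vec<usize> = Vec::new();
    let mut used = 0;
    for (i, &s) in sizes.iter().enumerate() {
        current.push(i);
        used += s;
        // Since the sizes are non-increasing powers of two, `used` is always a
        // multiple of the current size, so a pack fills up exactly and never overflows.
        if used == pack_size {
            packed_polys.push(std::mem::take(&mut current));
            used = 0;
        }
    }
    // Whatever remains (the smallest polys) would become the smaller final_poly.
    (packed_polys, current)
}
```

For example, with sizes [8, 8, 4, 2, 1] and pack_size = 16, the two size-8 polys form one packed poly and the remaining [4, 2, 1] are left for final_poly.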

@Jiangkm3 (Collaborator Author) commented:

The current implementation is still rudimentary: it does not support parallel execution and has no optimization whatsoever. I'm also waiting on a sumcheck PR that rewrites the sumcheck implementation to match the one in ceno_zkvm. However, this version is functional and all tests pass.

@hero78119 (Collaborator) commented Mar 19, 2025

Hi @Jiangkm3, could you elaborate on the primitive in your design from the verifier's perspective: given

  • a unified pack point $r$
  • two polynomial pairs ($eval_1$, $r$[..num_var_1]) and ($eval_2$, $r$[..num_var_2]), where num_var_1 != num_var_2 and both are arbitrary,

what is the formula for combining both into a single (eval_3, num_var_3)? Thanks.

@Jiangkm3 (Collaborator Author) commented Mar 28, 2025

Updated the approach to match the new suffix alignment of sumcheck. See the new interleaving approach (

ceno/mpcs/src/lib.rs

Lines 80 to 84 in a0f20c0

// Given the sizes of a list of polys sorted in decreasing order,
// compute which list each entry of their interleaved form belongs to,
// e.g.: [4, 2, 1, 1] => [0, 1, 0, 2, 0, 1, 0, 3]
// If the sizes do not sum up to a power of 2, use sizes.len() for padding
// This is performed recursively: at each step, only interleave the polys between head..tail
).
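
To make the recursion concrete, here is a minimal sketch that reproduces the index assignment described in the excerpt. It is not the PR's code: interleave_indices and assign are hypothetical names, and the sketch assumes the sizes are non-increasing powers of two.

```rust
// Hypothetical sketch (not the PR's code): for each slot of the interleaved
// vector, compute the index of the poly it belongs to. Slots not covered by
// any poly are marked with sizes.len(), matching the padding rule above.
fn interleave_indices(sizes: &[usize]) -> Vec<usize> {
    let total = sizes.iter().sum::<usize>().next_power_of_two();
    let mut out = vec![sizes.len(); total];
    assign(sizes, 0, sizes.len(), &mut out, 0, 1, total);
    out
}

// Assign the polys head..tail to the `len` slots offset, offset + stride, ...
fn assign(
    sizes: &[usize],
    head: usize,
    tail: usize,
    out: &mut [usize],
    offset: usize,
    stride: usize,
    len: usize,
) {
    if head >= tail || len == 0 {
        return; // nothing left to place; these slots stay as padding
    }
    if head + 1 == tail && sizes[head] == len {
        // A single poly exactly fills the remaining slots.
        for i in 0..len {
            out[offset + i * stride] = head;
        }
        return;
    }
    // Otherwise split the slots into an "even" and an "odd" half: the maximal
    // prefix of polys that fits into len / 2 goes into the even half, the
    // rest (and any padding) into the odd half.
    let mut mid = head;
    let mut acc = 0;
    while mid < tail && acc + sizes[mid] <= len / 2 {
        acc += sizes[mid];
        mid += 1;
    }
    assign(sizes, head, mid, out, offset, 2 * stride, len / 2);
    assign(sizes, mid, tail, out, offset + stride, 2 * stride, len / 2);
}
```

On the example above, interleave_indices(&[4, 2, 1, 1]) yields [0, 1, 0, 2, 0, 1, 0, 3], and interleave_indices(&[2, 1]) yields [0, 1, 0, 2], where 2 == sizes.len() marks the padding slot.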

@Jiangkm3 (Collaborator Author) commented:

New interleave function here:

ceno/mpcs/src/lib.rs

Lines 172 to 175 in 1c504a5

// Interleave the polys given their positions on the binary tree
// Assume the polys are sorted by decreasing size
// Denote: N - size of the interleaved poly; M - num of polys
// This function performs the interleave in O(M) + O(N) time and is *potentially* parallelizable

This allows the prover to compute the interleaving in O(num_polys) + O(interleave_size) time, which should be optimal.
This approach is also potentially parallelizable, although not trivially so; still, it is much easier to parallelize than the original approach above.
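
As a companion to the hypothetical interleave_indices sketch above, the scatter itself can be written as a single pass over the interleaved slots with one cursor per poly. This only illustrates the O(M) + O(N) shape of the computation and is not the PR's implementation.

```rust
// Hypothetical sketch (not the PR's code): scatter the evaluations of each
// poly into the interleaved vector using the slot-owner map computed by
// interleave_indices. polys[i] holds the evaluations of poly i, with polys
// sorted by decreasing size; padding slots are left at F::default().
fn interleave_evals<F: Default + Copy>(polys: &[Vec<F>]) -> Vec<F> {
    let sizes: Vec<usize> = polys.iter().map(|p| p.len()).collect();
    let slots = interleave_indices(&sizes); // slot -> owning poly (or padding)
    let mut cursors = vec![0usize; polys.len()]; // next unread element per poly
    let mut out = vec![F::default(); slots.len()];
    for (slot, &owner) in slots.iter().enumerate() {
        if owner < polys.len() {
            out[slot] = polys[owner][cursors[owner]];
            cursors[owner] += 1;
        } // otherwise this is a padding slot
    }
    out
}
```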

@Jiangkm3 (Collaborator Author) commented:

Parallel version of interleave now available:

ceno/mpcs/src/lib.rs

Lines 229 to 231 in 3d8b8b9

// Parallel version: divide interleaved_evaluation into chunks
#[cfg(feature = "parallel")]
fn interleave_polys<E: ExtensionField>(
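
For illustration only, a chunked parallel scatter in that spirit could look like the sketch below. It builds on the hypothetical interleave_indices above, uses rayon, and relies on the fact that poly i ends up at a fixed offset with stride total / sizes[i], so each output chunk can be filled independently. None of these names are the PR's actual API.

```rust
use rayon::prelude::*;

// Hypothetical parallel variant (not the PR's code): fill disjoint chunks of
// the interleaved vector in parallel. Poly i occupies the slots
// offsets[i], offsets[i] + strides[i], ..., with strides[i] = total / sizes[i],
// so the element index for any slot is computable locally within a chunk.
fn interleave_evals_parallel<F: Default + Copy + Send + Sync>(polys: &[Vec<F>]) -> Vec<F> {
    let sizes: Vec<usize> = polys.iter().map(|p| p.len()).collect();
    let slots = interleave_indices(&sizes); // slot -> owning poly (or padding)
    let total = slots.len();
    let strides: Vec<usize> = sizes.iter().map(|&s| total / s).collect();
    // offsets[i] = first slot owned by poly i.
    let mut offsets = vec![usize::MAX; polys.len()];
    for (slot, &owner) in slots.iter().enumerate() {
        if owner < polys.len() && offsets[owner] == usize::MAX {
            offsets[owner] = slot;
        }
    }
    let chunk_size = 1 << 12; // tuning knob
    let mut out = vec![F::default(); total];
    out.par_chunks_mut(chunk_size)
        .enumerate()
        .for_each(|(c, chunk)| {
            for (i, val) in chunk.iter_mut().enumerate() {
                let slot = c * chunk_size + i;
                let owner = slots[slot];
                if owner < polys.len() {
                    *val = polys[owner][(slot - offsets[owner]) / strides[owner]];
                } // padding slots stay at F::default()
            }
        });
    out
}
```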
