Merge pull request #2538 from o1-labs/dw/poly-commitment-more-docs-and-change-description

Poly-commitment: add README and documentation
dannywillems authored Sep 9, 2024
2 parents fbd15f0 + 512e663 commit a0b97af
Showing 5 changed files with 59 additions and 37 deletions.
2 changes: 1 addition & 1 deletion poly-commitment/Cargo.toml
@@ -1,7 +1,7 @@
[package]
name = "poly-commitment"
version = "0.1.0"
description = "An implementation of an inner-product argument polynomial commitment scheme, as used in kimchi"
description = "Library implementing different polynomial commitment schemes, such as IPA and KZG10"
repository = "https://github.com/o1-labs/proof-systems"
homepage = "https://o1-labs.github.io/proof-systems/"
documentation = "https://o1-labs.github.io/proof-systems/rustdoc/"
13 changes: 13 additions & 0 deletions poly-commitment/README.md
@@ -0,0 +1,13 @@
# poly-commitment: implementations of multiple PCS

This library offers implementations of different Polynomial Commitment Schemes
(PCS) that can be used in Polynomial Interactive Oracle Proofs (PIOPs) such as
PlonK.

Currently, the following polynomial commitment schemes are implemented:
- [KZG10](./src/kzg.rs)
- [Inner Product Argument](./src/commitment.rs)

The implementations were initially designed to be compatible with Kimchi (a
PlonK-ish variant with 15 wires and some custom gates) and for use in the Mina
protocol. For instance, submodules are provided to convert types to OCaml for
use in the Mina protocol codebase.
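As a rough intuition for what such a commitment scheme provides, here is a
minimal, deliberately insecure sketch (not the crate's API): a Pedersen-style
commitment to a coefficient vector over the toy group `Z_p^*`, where the prime
`P` and the bases `g` are arbitrary illustrative choices. The key property on
display is additive homomorphism: the commitment to a sum of polynomials is
the product of their commitments.

```rust
// Toy illustration of the additive homomorphism a PCS relies on.
// NOT the poly-commitment crate API: the group Z_p^* with a small
// prime modulus is insecure and chosen purely for exposition.

const P: u64 = 1_000_000_007; // toy prime modulus (illustrative choice)

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    // Square-and-multiply exponentiation mod P.
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// Commit to coefficients `f` under bases `g`: prod g[i]^f[i] mod P.
fn commit(g: &[u64], f: &[u64]) -> u64 {
    g.iter()
        .zip(f)
        .fold(1u64, |acc, (&gi, &fi)| acc * pow_mod(gi, fi) % P)
}

fn main() {
    let g = [5, 7, 11, 13]; // toy "SRS" bases
    let f1 = [3, 0, 2, 9];
    let f2 = [1, 4, 4, 2];
    let sum: Vec<u64> = f1.iter().zip(&f2).map(|(a, b)| a + b).collect();
    // Homomorphism: com(f1 + f2) == com(f1) * com(f2)
    assert_eq!(commit(&g, &sum), commit(&g, &f1) * commit(&g, &f2) % P);
    println!("homomorphism holds");
}
```

The real schemes in this crate (IPA, KZG10) work over elliptic-curve groups
and add blinding, but the same linearity is what makes batching and Lagrange
tricks below possible.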
8 changes: 5 additions & 3 deletions poly-commitment/src/commitment.rs
@@ -2,8 +2,8 @@
//! The following functionality is implemented
//!
//! 1. Commit to polynomial with its max degree
//! 2. Open polynomial commitment batch at the given evaluation point and scaling factor scalar
//! producing the batched opening proof
//! 2. Open polynomial commitment batch at the given evaluation point and
//! scaling factor scalar producing the batched opening proof
//! 3. Verify batch of batched opening proofs
use crate::{
@@ -45,6 +45,7 @@ pub struct PolyComm<C> {
pub elems: Vec<C>,
}

/// A commitment to a polynomial with some blinding factors.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct BlindedCommitment<G>
where
@@ -144,7 +145,8 @@ where
}

impl<A: Copy + CanonicalDeserialize + CanonicalSerialize> PolyComm<A> {
// TODO: if all callers end up calling unwrap, just call this zip_eq and panic here (and document the panic)
// TODO: if all callers end up calling unwrap, just call this zip_eq and
// panic here (and document the panic)
pub fn zip<B: Copy + CanonicalDeserialize + CanonicalSerialize>(
&self,
other: &PolyComm<B>,
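The TODO above weighs the current `Option`-returning `zip` against a
panicking `zip_eq`-style variant. A standalone sketch of the
`Option`-returning semantics, with simplified stand-in types rather than the
crate's actual generic bounds:

```rust
// Hedged sketch of the `zip` behaviour discussed in the TODO: pair up
// the chunks of two commitments elementwise, returning None on a
// length mismatch (the alternative is to panic, zip_eq-style).
// `PolyComm` here is a simplified stand-in, not the crate's type.

#[derive(Debug, PartialEq)]
struct PolyComm<C> {
    elems: Vec<C>,
}

impl<C: Copy> PolyComm<C> {
    /// Returns None when the two commitments have a different number
    /// of chunks.
    fn zip<B: Copy>(&self, other: &PolyComm<B>) -> Option<PolyComm<(C, B)>> {
        if self.elems.len() != other.elems.len() {
            return None;
        }
        let elems = self
            .elems
            .iter()
            .zip(&other.elems)
            .map(|(&a, &b)| (a, b))
            .collect();
        Some(PolyComm { elems })
    }
}

fn main() {
    let a = PolyComm { elems: vec![1u32, 2, 3] };
    let b = PolyComm { elems: vec![10u32, 20, 30] };
    assert_eq!(a.zip(&b).unwrap().elems, vec![(1, 10), (2, 20), (3, 30)]);
    // Mismatched chunk counts are rejected rather than silently truncated.
    assert!(a.zip(&PolyComm { elems: vec![1u32] }).is_none());
}
```

If every caller ends up unwrapping, panicking here with a documented
precondition (as the TODO suggests) would indeed be the simpler contract.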
32 changes: 17 additions & 15 deletions poly-commitment/src/evaluation_proof.rs
@@ -70,10 +70,12 @@ pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(
) -> (DensePolynomial<G::ScalarField>, G::ScalarField) {
let mut plnm = ScaledChunkedPolynomial::<G::ScalarField, &[G::ScalarField]>::default();
let mut plnm_evals_part = {
// For now just check that all the evaluation polynomials are the same degree so that we
// can do just a single FFT.
// Furthermore we check they have size less than the SRS size so we don't have to do chunking.
// If/when we change this, we can add more complicated code to handle different degrees.
// For now just check that all the evaluation polynomials are the same
// degree so that we can do just a single FFT.
// Furthermore we check they have size less than the SRS size so we
// don't have to do chunking.
// If/when we change this, we can add more complicated code to handle
// different degrees.
let degree = plnms
.iter()
.fold(None, |acc, (p, _)| match p {
@@ -146,24 +148,24 @@ pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(

impl<G: CommitmentCurve> SRS<G> {
/// This function opens polynomial commitments in batch
/// plnms: batch of polynomials to open commitments for with, optionally, max degrees
/// elm: evaluation point vector to open the commitments at
/// polyscale: polynomial scaling factor for opening commitments in batch
/// evalscale: eval scaling factor for opening commitments in batch
/// oracle_params: parameters for the random oracle argument
/// RETURN: commitment opening proof
/// - plnms: batch of polynomials to open commitments for with, optionally, max degrees
/// - elm: evaluation point vector to open the commitments at
/// - polyscale: polynomial scaling factor for opening commitments in batch
/// - evalscale: eval scaling factor for opening commitments in batch
/// - oracle_params: parameters for the random oracle argument
/// RETURN: commitment opening proof
#[allow(clippy::too_many_arguments)]
#[allow(clippy::type_complexity)]
#[allow(clippy::many_single_char_names)]
pub fn open<EFqSponge, RNG, D: EvaluationDomain<G::ScalarField>>(
&self,
group_map: &G::Map,
// TODO(mimoo): create a type for that entry
plnms: PolynomialsToCombine<G, D>, // vector of polynomial with commitment randomness
elm: &[G::ScalarField], // vector of evaluation points
polyscale: G::ScalarField, // scaling factor for polynoms
evalscale: G::ScalarField, // scaling factor for evaluation point powers
mut sponge: EFqSponge, // sponge
plnms: PolynomialsToCombine<G, D>,
elm: &[G::ScalarField],
polyscale: G::ScalarField,
evalscale: G::ScalarField,
mut sponge: EFqSponge,
rng: &mut RNG,
) -> OpeningProof<G>
where
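The documented `polyscale` parameter relies on a simple identity: combining
polynomials with powers of a random scalar commutes with evaluation, so one
opening of the combined polynomial attests to all the individual evaluations
at once. A sketch of that identity, with plain `i128` coefficients standing
in for field elements (not the crate's actual types):

```rust
// Sketch of the batching identity behind `open`:
//   (sum_i polyscale^i * p_i)(elm) == sum_i polyscale^i * p_i(elm).
// Integers stand in for field elements; purely illustrative.

fn eval(p: &[i128], x: i128) -> i128 {
    // Horner evaluation; p[i] is the coefficient of x^i.
    p.iter().rev().fold(0, |acc, &c| acc * x + c)
}

/// sum_i polyscale^i * p_i, returned as one coefficient vector.
fn combine(polys: &[Vec<i128>], polyscale: i128) -> Vec<i128> {
    let len = polys.iter().map(|p| p.len()).max().unwrap_or(0);
    let mut out = vec![0i128; len];
    let mut s = 1i128;
    for p in polys {
        for (o, &c) in out.iter_mut().zip(p) {
            *o += s * c;
        }
        s *= polyscale;
    }
    out
}

fn main() {
    let polys = vec![vec![1, 2, 3], vec![5, 0, 1], vec![7, 7]];
    let (polyscale, elm) = (9, 4);
    // Evaluating the combined polynomial...
    let lhs = eval(&combine(&polys, polyscale), elm);
    // ...equals combining the individual evaluations.
    let rhs: i128 = polys
        .iter()
        .enumerate()
        .map(|(i, p)| polyscale.pow(i as u32) * eval(p, elm))
        .sum();
    assert_eq!(lhs, rhs);
}
```

In the real protocol `polyscale` is sampled by the verifier (via the sponge),
which is what makes accepting the single combined opening sound.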
41 changes: 23 additions & 18 deletions poly-commitment/src/srs.rs
@@ -26,7 +26,8 @@ pub struct SRS<G> {
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
pub h: G,

// TODO: the following field should be separated, as they are optimization values
// TODO: the following field should be separated, as they are optimization
// values
/// Commitments to Lagrange bases, per domain size
#[serde(skip)]
pub lagrange_bases: HashMap<usize, Vec<PolyComm<G>>>,
@@ -96,8 +97,8 @@ impl<G: CommitmentCurve> SRS<G> {
self.g.len()
}

/// Compute commitments to the lagrange basis corresponding to the given domain and
/// cache them in the SRS
/// Compute commitments to the lagrange basis corresponding to the given
/// domain and cache them in the SRS
pub fn add_lagrange_basis(&mut self, domain: D<G::ScalarField>) {
let n = domain.size();

@@ -133,8 +134,8 @@ impl<G: CommitmentCurve> SRS<G> {
// Let v in V be the vector [ L_0, ..., L_{n - 1} ] where L_i is the i^{th}
// normalized Lagrange polynomial (where L_i(w^j) = j == i ? 1 : 0).
//
// Consider the rows of M(w) * v. Let me write out the matrix and vector so you
// can see more easily.
// Consider the rows of M(w) * v. Let me write out the matrix and vector
// so you can see more easily.
//
// | 1 1 1 ... 1 | | L_0 |
// | 1 w w^2 ... w^{n-1} | * | L_1 |
@@ -153,13 +154,14 @@
//
// Thus, M(w) * v is the vector u, where u = [ 1, x, x^2, ..., x^{n-1} ]
//
// Therefore, the IFFT algorithm, when applied to the vector u (the standard
// monomial basis) will yield the vector v of the (normalized) Lagrange polynomials.
// Therefore, the IFFT algorithm, when applied to the vector u (the
// standard monomial basis) will yield the vector v of the (normalized)
// Lagrange polynomials.
//
// Now, because the polynomial commitment scheme is additively homomorphic, and
// because the commitment to the polynomial x^i is just self.g[i], we can obtain
// commitments to the normalized Lagrange polynomials by applying IFFT to the
// vector self.g[0..n].
// Now, because the polynomial commitment scheme is additively
// homomorphic, and because the commitment to the polynomial x^i is just
// self.g[i], we can obtain commitments to the normalized Lagrange
// polynomials by applying IFFT to the vector self.g[0..n].
//
//
// Further still, we can do the same trick for 'chunked' polynomials.
@@ -169,15 +171,18 @@
// where each f_i has degree n-1.
//
// In the above, if we set u = [ 1, x, x^2, ..., x^{n-1}, 0, 0, ..., 0 ]
// then we effectively 'zero out' any polynomial terms higher than x^{n-1}, leaving
// us with the 'partial Lagrange polynomials' that contribute to f_0.
// then we effectively 'zero out' any polynomial terms higher than
// x^{n-1}, leaving us with the 'partial Lagrange polynomials' that
// contribute to f_0.
//
// Similarly, u = [ 0, 0, ..., 0, 1, x, x^2, ..., x^{n-1}, 0, 0, ..., 0] with n leading
// zeros 'zeroes out' all terms except the 'partial Lagrange polynomials' that
// contribute to f_1, and likewise for each f_i.
// Similarly, u = [ 0, 0, ..., 0, 1, x, x^2, ..., x^{n-1}, 0, 0, ..., 0]
// with n leading zeros 'zeroes out' all terms except the 'partial
// Lagrange polynomials' that contribute to f_1, and likewise for each
// f_i.
//
// By computing each of these, and recollecting the terms as a vector of polynomial
// commitments, we obtain a chunked commitment to the L_i polynomials.
// By computing each of these, and recollecting the terms as a vector of
// polynomial commitments, we obtain a chunked commitment to the L_i
// polynomials.
let srs_size = self.g.len();
let num_elems = (n + srs_size - 1) / srs_size;
let mut elems = Vec::with_capacity(num_elems);
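The long comment in `add_lagrange_basis` argues that applying the IFFT to the
monomial commitments `self.g[0..n]` yields commitments to the normalized
Lagrange polynomials. A toy check of that claim, where "commitments" are
modeled as evaluations at a secret point `tau` in the small field `F_17`
(additively homomorphic like the real scheme, but insecure and purely
illustrative; the constants are arbitrary choices, not the crate's):

```rust
// Toy check: the inverse DFT of commitments to 1, x, ..., x^{n-1}
// equals the commitments to the normalized Lagrange polynomials
// L_0, ..., L_{n-1}, i.e. v = M(w)^{-1} * u as in the comment above.

const P: u64 = 17; // toy field F_17
const N: usize = 4;
const W: u64 = 4; // primitive 4th root of unity mod 17 (4^2 = 16 = -1)

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % P;
        }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 {
    pow_mod(a, P - 2) // Fermat inverse in F_P
}

/// Inverse DFT: v_i = n^{-1} * sum_k u_k * w^{-ik}  (mod P).
fn idft(u: &[u64; N]) -> [u64; N] {
    let (w_inv, n_inv) = (inv(W), inv(N as u64));
    let mut v = [0u64; N];
    for i in 0..N {
        let mut s = 0;
        for k in 0..N {
            s = (s + u[k] * pow_mod(w_inv, (i * k) as u64)) % P;
        }
        v[i] = s * n_inv % P;
    }
    v
}

/// Direct evaluation of the i-th normalized Lagrange polynomial
/// (over the points 1, w, w^2, w^3) at tau.
fn lagrange_at(i: usize, tau: u64) -> u64 {
    let x: Vec<u64> = (0..N).map(|j| pow_mod(W, j as u64)).collect();
    let mut acc = 1;
    for j in 0..N {
        if j != i {
            acc = acc * ((tau + P - x[j]) % P) % P;
            acc = acc * inv((x[i] + P - x[j]) % P) % P;
        }
    }
    acc
}

fn main() {
    let tau = 7; // the toy "secret" behind the commitments
    // u = commitments to the monomials: tau^0, tau^1, tau^2, tau^3.
    let u = [1, tau, tau * tau % P, tau * tau % P * tau % P];
    let v = idft(&u);
    for i in 0..N {
        assert_eq!(v[i], lagrange_at(i, tau));
    }
    println!("IDFT of monomial commitments = Lagrange commitments");
}
```

The SRS code does the same thing with curve points instead of scalars, and
the chunking described above handles domains larger than the SRS.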
