Poly-commitment: more docs and comments #2642

Merged · 19 commits · Oct 2, 2024
Commits
f6c698e
Poly-commitment: improve documentation regarding PolyComm
dannywillems Oct 2, 2024
a7b1f5d
Poly-commitment: comment DensePolynomialOrEvaluation
dannywillems Oct 2, 2024
027b586
Poly-commitment: additional comment/TODO.
dannywillems Oct 2, 2024
d829212
Poly-commitment: simply using utf8 greek letters in comment
dannywillems Oct 2, 2024
f54031c
Poly-commitment: reword documentation of PolynomialToCombine
dannywillems Oct 2, 2024
09f73b3
poly-commitment: add a TODO regarding naming polynomials
dannywillems Oct 2, 2024
2152c11
Poly-commitment: additional doc for ScaledChunkedPolynomial
dannywillems Oct 2, 2024
666d476
Poly-commitment: doc + example for to_dense_polynomial method
dannywillems Oct 2, 2024
1112bd1
Poly-commitment: n is never 0 as we treat only the non-empty case
dannywillems Oct 2, 2024
5130777
Poly-commitment: additional comments on combine_polys
dannywillems Oct 2, 2024
1b8e926
Poly-commitment: add documentation for combine_polys
dannywillems Oct 2, 2024
ae5fcaa
Correct typo in doc
marcbeunardeau88 Oct 2, 2024
0faab06
PC/IPA : more doc on opening proof
marcbeunardeau88 Oct 2, 2024
c88e9d1
add clone to eval
marcbeunardeau88 Oct 2, 2024
fed8fbc
PC/IPA/Test: use test's rng
marcbeunardeau88 Oct 2, 2024
ce2ace1
PC/IPA/test: open at random nb of points
marcbeunardeau88 Oct 2, 2024
271e3af
PC/IPA:fix doc
marcbeunardeau88 Oct 2, 2024
403fff7
Update poly-commitment/src/ipa.rs
dannywillems Oct 2, 2024
dd341e8
Update poly-commitment/src/ipa.rs
dannywillems Oct 2, 2024
15 changes: 13 additions & 2 deletions poly-commitment/src/commitment.rs
@@ -23,7 +23,17 @@ use serde_with::{
};
use std::{iter::Iterator, marker::PhantomData};

/// A polynomial commitment.
/// Represents a polynomial commitment when the type is instantiated with a
/// curve.
///
/// The structure also handles chunking, i.e. the case where we want to handle
/// polynomials whose degree is higher than the SRS size. For this reason, the
/// field `elems` is a vector.
///
/// Note that the parameter `C` is not constrained to be a curve, so in some
/// places in the code `C` can refer to a scalar field element. For instance,
/// `PolyComm<G::ScalarField>` is used to represent the evaluations of the
/// polynomial bound by a specific commitment, at a particular evaluation point.
#[serde_as]
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(bound = "C: CanonicalDeserialize + CanonicalSerialize")]
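As a complement to the chunking note above, here is a minimal, self-contained sketch of the idea (a hypothetical helper, not part of the crate): a polynomial whose degree exceeds the SRS size is split into segments of at most `srs_size` coefficients, and `elems` would hold one commitment per segment.

```rust
// Hedged sketch: we only compute how many elements `elems` would contain;
// the actual commitment to each chunk is elided.
fn num_chunks(num_coeffs: usize, srs_size: usize) -> usize {
    // Ceiling division, written without `div_ceil` (stable only since 1.73),
    // with at least one chunk for the zero polynomial.
    std::cmp::max(1, (num_coeffs + srs_size - 1) / srs_size)
}

fn main() {
    // A degree-5 polynomial (6 coefficients) with an SRS of size 4 needs
    // two chunks, hence `elems.len() == 2`.
    assert_eq!(num_chunks(6, 4), 2);
    // A polynomial fitting in the SRS needs a single chunk.
    assert_eq!(num_chunks(3, 4), 1);
}
```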
@@ -160,7 +170,7 @@ impl<A: Copy + Clone + CanonicalDeserialize + CanonicalSerialize> PolyComm<A> {
/// |g: G, x: G::ScalarField| g.scale(2*x + 2^n)
/// ```
///
/// otherwise. So, if we want to actually scale by `s`, we need to apply the
/// otherwise. So, if we want to actually scale by `x`, we need to apply the
/// inverse function of `|x| x + 2^n` (or of `|x| 2*x + 2^n` in the other case),
/// before supplying the scalar to our in-circuit scalar-multiplication
/// function. This computes that inverse function.
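To make the inverse map concrete, here is a hedged, self-contained illustration over plain integers (the real code works modulo the scalar field order, and `shift` below is a hypothetical stand-in, not the crate's function): if the in-circuit function multiplies by `2*x + 2^n`, then feeding it `(s - 2^n)/2` makes it scale by `s`.

```rust
// What the in-circuit scalar multiplication effectively multiplies by.
fn circuit_scalar(x: i128, n: u32) -> i128 {
    2 * x + (1i128 << n)
}

// Hypothetical inverse of |x| 2*x + 2^n (assumes s - 2^n is even; over a
// field this division is just multiplication by the inverse of 2).
fn shift(s: i128, n: u32) -> i128 {
    (s - (1i128 << n)) / 2
}

fn main() {
    let (s, n) = (42i128, 8u32);
    // Supplying shift(s) to the circuit results in scaling by s.
    assert_eq!(circuit_scalar(shift(s, n), n), s);
}
```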
@@ -466,6 +476,7 @@ pub fn combined_inner_product<F: PrimeField>(
}

/// Contains the evaluation of a polynomial commitment at a set of points.
#[derive(Clone)]
[Review thread on the line above]
Member (PR author): Is it still required?
Contributor: nope

pub struct Evaluation<G>
where
G: AffineRepr,
117 changes: 91 additions & 26 deletions poly-commitment/src/ipa.rs
@@ -31,14 +31,21 @@ use serde::{Deserialize, Serialize};
use serde_with::serde_as;
use std::{cmp::min, collections::HashMap, iter::Iterator, ops::AddAssign};

// A formal sum of the form
// `s_0 * p_0 + ... s_n * p_n`
// where each `s_i` is a scalar and each `p_i` is a polynomial.
/// A formal sum of the form
/// `s_0 * p_0 + ... + s_n * p_n`
/// where each `s_i` is a scalar and each `p_i` is a polynomial.
/// The parameter `P` is expected to hold the coefficients of the polynomials
/// `p_i`, even though it could also be treated as evaluations.
///
/// This assumption matters when `to_dense_polynomial` is called.
#[derive(Default)]
struct ScaledChunkedPolynomial<F, P>(Vec<(F, P)>);

/// Represents a polynomial either by its coefficients or by its evaluations
pub enum DensePolynomialOrEvaluations<'a, F: FftField, D: EvaluationDomain<F>> {
/// A polynomial represented by its coefficients
DensePolynomial(&'a DensePolynomial<F>),
/// A polynomial represented by its evaluations over a domain `D`
Evaluations(&'a Evaluations<F, D>, D),
}
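A minimal, self-contained analogue of this enum (hypothetical types standing in for the arkworks ones) may help illustrate why both forms are carried around: coefficients are what the commitment machinery ultimately consumes, while evaluations are what the prover often already has.

```rust
// Hedged sketch: i64 values stand in for field elements, and the domain is
// reduced to its size.
enum PolyRepr<'a> {
    Coefficients(&'a [i64]),
    // (evaluations over the domain, domain size)
    Evaluations(&'a [i64], usize),
}

// An upper bound on the number of coefficients needed to represent the
// polynomial in dense coefficient form.
fn coeff_len_bound(p: &PolyRepr) -> usize {
    match p {
        PolyRepr::Coefficients(c) => c.len(),
        PolyRepr::Evaluations(_, n) => *n,
    }
}

fn main() {
    let coeffs = [1, 2, 3];
    let evals = [4, 0, 4, 0];
    assert_eq!(coeff_len_bound(&PolyRepr::Coefficients(&coeffs)), 3);
    assert_eq!(coeff_len_bound(&PolyRepr::Evaluations(&evals, 4)), 4);
}
```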

@@ -49,14 +56,25 @@ impl<F, P> ScaledChunkedPolynomial<F, P> {
}

impl<'a, F: Field> ScaledChunkedPolynomial<F, &'a [F]> {
/// Computes the resulting scaled polynomial.
/// Example:
/// Given the two polynomials `1 + 2X` and `3 + 4X`, and the scaling
/// factors `2` and `3`, the result is the polynomial `11 + 16X`.
/// ```text
/// 2 * [1, 2] + 3 * [3, 4] = [2, 4] + [9, 12] = [11, 16]
/// ```
fn to_dense_polynomial(&self) -> DensePolynomial<F> {
// Note: using a reference to avoid reallocation of the result.
let mut res = DensePolynomial::<F>::zero();

let scaled: Vec<_> = self
.0
.par_iter()
.map(|(scale, segment)| {
let scale = *scale;
// We scale each coefficient individually; this is done by hand
// because DensePolynomial doesn't provide a `scale` method.
let v = segment.par_iter().map(|x| scale * *x).collect();
DensePolynomial::from_coefficients_vec(v)
})
Expand All @@ -70,22 +88,40 @@ impl<'a, F: Field> ScaledChunkedPolynomial<F, &'a [F]> {
}
}
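Outside the diff, here is a hedged, self-contained sketch of the scale-and-add logic that `to_dense_polynomial` performs, using `i64` coefficients instead of a field type and sequential iteration instead of `rayon` (all names are hypothetical):

```rust
fn combine_scaled(chunks: &[(i64, &[i64])]) -> Vec<i64> {
    let mut res: Vec<i64> = Vec::new();
    for (scale, coeffs) in chunks {
        if res.len() < coeffs.len() {
            res.resize(coeffs.len(), 0);
        }
        // Scale each coefficient, then accumulate into the result.
        for (r, c) in res.iter_mut().zip(coeffs.iter()) {
            *r += scale * c;
        }
    }
    res
}

fn main() {
    // 2 * (1 + 2X) + 3 * (3 + 4X) = 11 + 16X, as in the doc example above.
    assert_eq!(combine_scaled(&[(2, &[1, 2]), (3, &[3, 4])]), vec![11, 16]);
}
```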

/// Combine the polynomials using `polyscale`, creating a single unified
/// polynomial to open.
/// Combine the polynomials using a scalar (`polyscale`), creating a single
/// unified polynomial to open. This function also accepts polynomials in
/// evaluation form. In this case it applies an IFFT and, if necessary,
/// applies chunking (i.e. splits the polynomial into multiple polynomials of
/// degree less than the SRS size).
/// Parameters:
/// - plnms: vector of polynomial with optional degree bound and commitment randomness
/// - polyscale: scaling factor for polynomials
/// - plnms: vector of polynomials, either in evaluation or coefficient form.
/// The order of the output follows the order of this structure.
/// - polyscale: scalar used to combine the polynomials; it is raised to
/// successive powers, one per polynomial (and per chunk) to combine.
///
/// Example:
/// Given the three polynomials `p1(X)` and `p3(X)` in coefficient form and
/// `p2(X)` in evaluation form, and the scaling factor `s`, the result is the
/// polynomial:
///
/// ```text
/// p1(X) + s * i_fft(chunks(p2))(X) + s^2 * p3(X)
/// ```
///
/// Additional complexity is added to handle chunks.
// TODO: move into utils? It is useful for multiple PCS
pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(
plnms: PolynomialsToCombine<G, D>,
polyscale: G::ScalarField,
srs_length: usize,
) -> (DensePolynomial<G::ScalarField>, G::ScalarField) {
let mut plnm = ScaledChunkedPolynomial::<G::ScalarField, &[G::ScalarField]>::default();
// Initialising the accumulator for the polynomials given in coefficient form
let mut plnm_coefficients =
ScaledChunkedPolynomial::<G::ScalarField, &[G::ScalarField]>::default();
// Initialising the accumulator for the polynomials given in evaluation form
let mut plnm_evals_part = {
// For now, just check that all the evaluation polynomials have the same
// degree so that we can do just a single FFT.
// Furthermore, we check that they have size less than the SRS size so we
// don't have to do chunking.
// If/when we change this, we can add more complicated code to handle
// different degrees.
let degree = plnms
@@ -106,9 +142,17 @@ pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(
let mut omega = G::ScalarField::zero();
let mut scale = G::ScalarField::one();

// iterating over polynomials in the batch
// Iterating over polynomials in the batch.
// Note that `omegas` are given as a `PolyComm<G::ScalarField>`; they are
// evaluations.
// We modify two different structures depending on the form of the
// polynomial we are currently processing: `plnm_coefficients` and
// `plnm_evals_part`. The two forms need to be treated separately.
for (p_i, omegas) in plnms {
match p_i {
// Here we scale the polynomial in evaluation form.
// Note that, thanks to the check above, sub_domain.size() always gives
// the same value.
DensePolynomialOrEvaluations::Evaluations(evals_i, sub_domain) => {
let stride = evals_i.evals.len() / sub_domain.size();
let evals = &evals_i.evals;
@@ -124,13 +168,14 @@ pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(
}
}

// Here we scale the polynomial in coefficient form
DensePolynomialOrEvaluations::DensePolynomial(p_i) => {
let mut offset = 0;
// iterating over chunks of the polynomial
for j in 0..omegas.elems.len() {
let segment = &p_i.coeffs[std::cmp::min(offset, p_i.coeffs.len())
..std::cmp::min(offset + srs_length, p_i.coeffs.len())];
plnm.add_poly(scale, segment);
plnm_coefficients.add_poly(scale, segment);

omega += &(omegas.elems[j] * scale);
scale *= &polyscale;
@@ -140,15 +185,22 @@ pub fn combine_polys<G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>>(
}
}

let mut plnm = plnm.to_dense_polynomial();
// Now we combine the evaluation and coefficient forms

// plnm will be our final combined polynomial. We first treat the
// polynomials in coefficient form, which simply requires scaling their
// coefficients and adding them up.
let mut plnm = plnm_coefficients.to_dense_polynomial();

if !plnm_evals_part.is_empty() {
// n is the number of evaluations, which is a multiple of the
// domain size.
// We now treat each chunk.
let n = plnm_evals_part.len();
let max_poly_size = srs_length;
let num_chunks = if n == 0 {
1
} else {
n / max_poly_size + if n % max_poly_size == 0 { 0 } else { 1 }
};
// Equivalent to `div_ceil`, which is unstable in Rust < 1.73.
let num_chunks = n / max_poly_size + if n % max_poly_size == 0 { 0 } else { 1 };
// Interpolation on the whole domain, i.e. it can be d2, d4, etc.
plnm += &Evaluations::from_vec_and_domain(plnm_evals_part, D::new(n).unwrap())
.interpolate()
.to_chunked_polynomial(num_chunks, max_poly_size)
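As a standalone illustration of the combination rule this function implements (`p_result = Σ_i polyscale^i * p_i`), here is a hedged sketch over `i64` coefficient vectors; the chunking and IFFT handling of the real function are deliberately omitted:

```rust
fn combine(polys: &[Vec<i64>], polyscale: i64) -> Vec<i64> {
    let max_len = polys.iter().map(Vec::len).max().unwrap_or(0);
    let mut res = vec![0i64; max_len];
    let mut scale = 1i64; // polyscale^0
    for p in polys {
        for (r, c) in res.iter_mut().zip(p) {
            *r += scale * c;
        }
        scale *= polyscale; // move to the next power of polyscale
    }
    res
}

fn main() {
    // (1 + X) + s*2 + s^2*(3 + X) with s = 10:
    // constant term: 1 + 20 + 300 = 321, X term: 1 + 100 = 101.
    assert_eq!(combine(&[vec![1, 1], vec![2], vec![3, 1]], 10), vec![321, 101]);
}
```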
@@ -773,13 +825,16 @@ where
}

impl<G: CommitmentCurve> SRS<G> {
/// This function opens polynomial commitments in batch
/// - plnms: batch of polynomials to open commitments for with, optionally, max degrees
/// This function opens polynomials in batch at several points
/// - plnms: batch of polynomials to open commitments for
/// - elm: evaluation point vector to open the commitments at
/// - polyscale: used to combine polynomials for opening commitments in batch
/// (we will open the combination \sum_i polyscale^i * plnms[i])
/// - evalscale: used to combine the evaluations so as to open at only one
/// combined point
/// - sponge: parameters for the random oracle argument
/// - rng: used to sample the blinders, needed for the zero-knowledge property
/// A slight modification of the original protocol is made when absorbing
/// the first prover message.
#[allow(clippy::too_many_arguments)]
#[allow(clippy::type_complexity)]
#[allow(clippy::many_single_char_names)]
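To make the role of `evalscale` concrete, here is a hedged, standalone sketch (a hypothetical helper, with plain `i64` arithmetic in place of the scalar field): the evaluations of the combined polynomial at several points are themselves folded into a single claim using powers of `evalscale`.

```rust
fn fold_evals(evals_at_points: &[i64], evalscale: i64) -> i64 {
    let mut acc = 0i64;
    let mut scale = 1i64; // evalscale^j
    for e in evals_at_points {
        acc += scale * e;
        scale *= evalscale;
    }
    acc
}

fn main() {
    // A polynomial evaluated at three points gives [5, 7, 9]; evalscale = 2.
    assert_eq!(fold_evals(&[5, 7, 9], 2), 5 + 2 * 7 + 4 * 9);
}
```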
@@ -806,6 +861,9 @@ impl<G: CommitmentCurve> SRS<G> {
let padded_length = 1 << rounds;

// TODO: Trim this to the degree of the largest polynomial
// TODO: In practice we always assume the SRS size is a power of 2, so
// the padding equals zero and this code could be removed. Only one
// current test case uses an SRS whose size is not a power of 2.
let padding = padded_length - self.g.len();
let mut g = self.g.clone();
g.extend(vec![G::zero(); padding]);
@@ -816,8 +874,8 @@ impl<G: CommitmentCurve> SRS<G> {
// just the powers of a single point as in the original IPA, but rather
// a vector of linearly combined powers with `evalscale` as recombiner.
//
// b_init_j = sum_i r^i elm_i^j
// = zeta^j + evalscale * zeta^j omega^j
// b_init[j] = Σ_i evalscale^i * elm_i^j
// = ζ^j + evalscale * ζ^j ω^j (in the specific case where we open at ζ and ζω)
let b_init = {
// randomise/scale the eval powers
let mut scale = G::ScalarField::one();
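A hedged, standalone sketch of how `b_init` is built (a hypothetical helper over `i64`, in place of the scalar field): entry `j` accumulates `evalscale^i * elm_i^j` over all evaluation points `elm_i`.

```rust
fn b_init(elm: &[i64], evalscale: i64, n: usize) -> Vec<i64> {
    let mut res = vec![0i64; n];
    let mut scale = 1i64; // evalscale^i, for the current point
    for e in elm {
        let mut pow = 1i64; // e^j, for the current entry
        for r in res.iter_mut() {
            *r += scale * pow;
            pow *= e;
        }
        scale *= evalscale;
    }
    res
}

fn main() {
    // Two points, 2 and 3, with evalscale = 10:
    // b_init[j] = 2^j + 10 * 3^j, so [11, 32, 94].
    assert_eq!(b_init(&[2, 3], 10, 3), vec![11, 32, 94]);
}
```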
@@ -840,6 +898,13 @@ impl<G: CommitmentCurve> SRS<G> {
.map(|(a, b)| *a * b)
.fold(G::ScalarField::zero(), |acc, x| acc + x);

// Usually, the prover would send `combined_inner_product` to the verifier,
// so we should absorb `combined_inner_product` here.
// However, it is more efficient in the recursion circuit
// to absorb a slightly modified version of it.
// As a reminder, in a recursive setting, the challenges are given as a
// public input and verified in the next iteration.
// See the `shift_scalar` doc.
sponge.absorb_fr(&[shift_scalar::<G>(combined_inner_product)]);

let t = sponge.challenge_fq();
4 changes: 3 additions & 1 deletion poly-commitment/src/lib.rs
@@ -146,7 +146,9 @@ pub trait SRS<G: CommitmentCurve>: Clone + Sized {
}

#[allow(type_alias_bounds)]
/// Vector of polynomials with commitment randomness (blinders).
/// Simply an alias to represent a list of polynomials, each with the
/// randomness (blinder) used for its commitment.
// TODO: add a string to name the polynomial
type PolynomialsToCombine<'a, G: CommitmentCurve, D: EvaluationDomain<G::ScalarField>> = &'a [(
DensePolynomialOrEvaluations<'a, G::ScalarField, D>,
PolyComm<G::ScalarField>,
79 changes: 42 additions & 37 deletions poly-commitment/tests/ipa_commitment.rs
@@ -18,7 +18,7 @@ use poly_commitment::{
ipa::{DensePolynomialOrEvaluations, SRS},
PolyComm, SRS as _,
};
use rand::{rngs::StdRng, SeedableRng};
use rand::Rng;
use std::array;

#[test]
@@ -242,7 +242,7 @@ fn test_opening_proof() {

// create an SRS
let srs = SRS::<VestaG>::create(20);
let rng = &mut StdRng::from_seed([0u8; 32]);
let mut rng = &mut o1_utils::tests::make_test_rng(None);

// commit the two polynomials
let commitment1 = srs.commit(&poly1, 1, rng);
@@ -266,38 +266,41 @@ fn test_opening_proof() {
commitment2.blinders,
),
];
let elm = vec![Fp::rand(rng), Fp::rand(rng)];

// Generate a random number of evaluation points
let nb_elem: u32 = rng.gen_range(1..7);
let elm: Vec<Fp> = (0..nb_elem).map(|_| Fp::rand(&mut rng)).collect();
let opening_proof = srs.open(&group_map, &polys, &elm, v, u, sponge.clone(), rng);

// evaluate the polynomials at these two points
let poly1_chunked_evals = vec![
poly1
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(elm[0]),
poly1
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(elm[1]),
];
// evaluate the polynomials at the points
let poly1_chunked_evals: Vec<Vec<Fp>> = elm
.iter()
.map(|elmi| {
poly1
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(*elmi)
})
.collect();

fn sum(c: &[Fp]) -> Fp {
c.iter().fold(Fp::zero(), |a, &b| a + b)
}

assert_eq!(sum(&poly1_chunked_evals[0]), poly1.evaluate(&elm[0]));
assert_eq!(sum(&poly1_chunked_evals[1]), poly1.evaluate(&elm[1]));
for (i, chunks) in poly1_chunked_evals.iter().enumerate() {
assert_eq!(sum(chunks), poly1.evaluate(&elm[i]))
}

let poly2_chunked_evals = vec![
poly2
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(elm[0]),
poly2
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(elm[1]),
];
let poly2_chunked_evals: Vec<Vec<Fp>> = elm
.iter()
.map(|elmi| {
poly2
.to_chunked_polynomial(1, srs.g.len())
.evaluate_chunks(*elmi)
})
.collect();

assert_eq!(sum(&poly2_chunked_evals[0]), poly2.evaluate(&elm[0]));
assert_eq!(sum(&poly2_chunked_evals[1]), poly2.evaluate(&elm[1]));
for (i, chunks) in poly2_chunked_evals.iter().enumerate() {
assert_eq!(sum(chunks), poly2.evaluate(&elm[i]))
}

let evaluations = vec![
Evaluation {
@@ -318,16 +321,18 @@ fn test_opening_proof() {
combined_inner_product(&v, &u, &es)
};

// verify the proof
let mut batch = vec![BatchEvaluationProof {
sponge,
evaluation_points: elm.clone(),
polyscale: v,
evalscale: u,
evaluations,
opening: &opening_proof,
combined_inner_product,
}];

assert!(srs.verify(&group_map, &mut batch, rng));
{
// create the batch and verify the opening proof
let mut batch = vec![BatchEvaluationProof {
sponge,
evaluation_points: elm,
polyscale: v,
evalscale: u,
evaluations,
opening: &opening_proof,
combined_inner_product,
}];

assert!(srs.verify(&group_map, &mut batch, rng));
}
}