
Commit

Documenting/clarifying commitments code
volhovm committed Jan 16, 2024
1 parent 6252a86 commit ca263f8
Showing 3 changed files with 50 additions and 18 deletions.
2 changes: 1 addition & 1 deletion book/macros.txt
@@ -1,4 +1,4 @@
\sample:{\overset{?}{\ \gets \ }}
\sample:{\stackrel{{\tiny \$}}{\ \gets\ }}
\GG:{\mathbb{G}}
\FF:{\mathbb{F}}
\language:{\mathcal{L}}
24 changes: 17 additions & 7 deletions book/src/pickles/accumulation.md
@@ -539,22 +539,22 @@ $$
x^{(1)}_k &= x^{(0)}_k + \alpha_1 \cdot x^{(0)}_{n/2+k}\\
x^{(2)}_k &= x^{(1)}_k + \alpha_2 \cdot x^{(1)}_{n/4+k}\\
&= x^{(0)}_k + \alpha_1 \cdot x^{(0)}_{n/2+k} + \alpha_2 \cdot (x^{(0)}_{n/4+k} + \alpha_1 \cdot x^{(0)}_{n/2 + n/4 +k})\\
&= \sum_{i=0}^3 x^{(0)}_{i \cdot \frac{n}{4} + k} \cdot \big( \prod_{j=0}^1 \alpha_j^{b(i,j)} \big)
&= \sum_{i=0}^3 x^{(0)}_{i \cdot \frac{n}{4} + k} \cdot \big( \prod_{j=0}^1 \alpha_j^{b(i,j)} \big)
\end{align*}
$$
Recalling that $x^{(0)}_k = x^k$, it is easy to see that this generalizes exactly to the expression we derived for $h_i$, which shows that evaluation through $h(X)$ is correct.

Finally, regarding evaluation complexity, it is clear that $\hpoly$ can be evaluated in $O(k = \log \ell)$ time as a product of $k$ factors.
Finally, regarding evaluation complexity, it is clear that $\hpoly$ can be evaluated in $O(k = \log \ell)$ time as a product of $k$ factors.
This concludes the proof.
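
For concreteness, a minimal Rust sketch of this $O(\log \ell)$ evaluation (assuming the challenge indexing $\alpha_1, \ldots, \alpha_k$ from the derivation above; the helper name `eval_h` is illustrative, not the actual `poly-commitment` API):

```rust
use ark_ff::Field;

/// Evaluate h(X) = prod_{j=1}^{k} (1 + alpha_j * X^{2^{k-j}}) at `x`
/// in O(k) field operations, given the folding challenges alpha_1..alpha_k.
fn eval_h<F: Field>(alphas: &[F], x: F) -> F {
    let mut res = F::one();
    // x^{2^0}, x^{2^1}, ... are obtained by repeated squaring;
    // alpha_k pairs with x^{2^0}, alpha_{k-1} with x^{2^1}, and so on.
    let mut pow = x;
    for alpha in alphas.iter().rev() {
        res *= F::one() + *alpha * pow;
        pow.square_in_place();
    }
    res
}
```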


#### The "Halo Trick"

The "Halo trick" resides in observing that this is also the case for $\vec{G}^{(k)}$:
since it is folded the same way as $\vec{\openx}$. It is not hard to convince one-self (using the same type of argument as above) that:
since it is folded the same way as $\vec{\openx}^{(k)}$. It is not hard to convince oneself (using the same type of argument as above) that:

$$
\vec{G}^{(k)} = \langle \vec{h}, \vec{G} \rangle
G^{(k)} = \langle \vec{h}, \vec{G} \rangle
$$

where $\vec{h}$ are the coefficients of $h(X)$ (just as $\vec{f}$ are the coefficients of $f(X)$), i.e. $h(X) = \sum_{i = 1}^{\ell} h_i X^{i-1}$.
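
As a sanity check, here is a small Rust sketch (an editor's illustration under the conventions above, not code from `poly-commitment`) of the two computations claimed equal: folding $\vec{G}$ round by round, and building the coefficient vector of $h(X)$.

```rust
use ark_ec::{AffineCurve, ProjectiveCurve};
use ark_ff::Field;

/// Coefficients of h(X) = prod_{j=1}^{k} (1 + alpha_j * X^{2^{k-j}})
/// in the monomial basis (length 2^k).
fn h_coefficients<F: Field>(alphas: &[F]) -> Vec<F> {
    let mut coeffs = vec![F::one()];
    for alpha in alphas.iter().rev() {
        let mut high: Vec<F> = coeffs.iter().map(|c| *alpha * *c).collect();
        coeffs.append(&mut high);
    }
    coeffs
}

/// Fold the bases exactly like the evaluation points: g <- g_lo + alpha * g_hi.
/// The single remaining point is claimed to equal <h_coefficients(alphas), G>.
fn fold_bases<G: AffineCurve>(mut g: Vec<G>, alphas: &[G::ScalarField]) -> G {
    for alpha in alphas {
        let n = g.len() / 2;
        g = (0..n)
            .map(|i| (g[i].into_projective() + g[n + i].mul(*alpha)).into_affine())
            .collect();
    }
    g[0]
}
```

In `evaluation_proof.rs` the base folding is done by `combine_one_endo`, which performs the same fold using the curve endomorphism to speed up the scalar multiplications.
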
@@ -646,6 +646,13 @@ y &= \sum_i \ \chalfold^{i-1} \cdot h^{(i)}(u) \in \FF \\
C &= \sum_i \ [\chalfold^{i-1}] \cdot U^{(i)} \in \GG
\end{align}
$$
Alternatively:
$$
\begin{align}
y &= \sum_i \ u^{i-1} \cdot h^{(i)}(\chaleval) \in \FF \\
C &= \sum_i \ [u^{i-1}] \cdot U^{(i)} \in \GG
\end{align}
$$

And outputs the following claim:

@@ -665,7 +672,9 @@ Taking a union bound over all $n$ terms leads to soundness error $\frac{n \ell}{

The reduction above requires $n$ $\GG$ operations and $O(n \log \ell)$ $\FF$ operations.
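
A minimal Rust sketch of this combination step (an illustration only; the names and signature are not from the Kimchi codebase), using the second form above, where powers of $u$ combine the accumulators and the per-accumulator evaluations $h^{(i)}(\chaleval)$ are assumed to be already computed:

```rust
use ark_ec::{AffineCurve, ProjectiveCurve};
use ark_ff::{One, Zero};

/// Combine accumulators into a single claim (C, y):
/// `accumulators[i] = (h^{(i)}(zeta), U^{(i)})`, combined with powers of `u`.
fn combine_accumulators<G: AffineCurve>(
    accumulators: &[(G::ScalarField, G)],
    u: G::ScalarField,
) -> (G, G::ScalarField) {
    let mut y = G::ScalarField::zero();
    let mut c = G::Projective::zero();
    let mut u_pow = G::ScalarField::one();
    for (h_at_zeta, big_u) in accumulators {
        y += u_pow * *h_at_zeta;
        c += big_u.mul(u_pow);
        u_pow *= u;
    }
    (c.into_affine(), y)
}
```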

**Addition of Polynomial Relations:** additional polynomial commitments (i.e. from PlonK) can be added to the randomized sums $(C, y)$ above and opened at $\chaleval$ as well: in which case the prover proves the claimed openings at $\chaleval$ before sampling the challenge $u$.
## Support for Arbitrary Polynomial Relations

Additional polynomial commitments (e.g. from PlonK) can be added to the randomized sums $(C, y)$ above and opened at $\chaleval$ as well, in which case the prover proves the claimed openings at $\chaleval$ before sampling the challenge $u$.
This is done in Kimchi/Pickles: the $\chaleval$ and $u$ above are the same as in the Kimchi code.
The combined $y$ (including both the $h(\cdot)$ evaluations and polynomial commitment openings at $\chaleval$ and $\chaleval \omega$) is called `combined_inner_product` in Kimchi.
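
In spirit, that combined value is a double sum over polynomials and evaluation points; a schematic Rust sketch (the function name, signature, and challenge names here are illustrative, not the actual `combined_inner_product` in Kimchi, which also handles chunked and shifted polynomials):

```rust
use ark_ff::Field;

/// `evals[i][j]` is the claimed evaluation of the i-th polynomial at the j-th
/// point (e.g. zeta and zeta * omega). `poly_chal` combines across polynomials,
/// `eval_chal` across evaluation points.
fn combine_evaluations<F: Field>(evals: &[Vec<F>], poly_chal: F, eval_chal: F) -> F {
    let mut res = F::zero();
    let mut p_pow = F::one();
    for per_poly in evals {
        let mut e_pow = F::one();
        for e in per_poly {
            res += p_pow * e_pow * *e;
            e_pow *= eval_chal;
        }
        p_pow *= poly_chal;
    }
    res
}
```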

@@ -682,7 +691,8 @@ Cycle of reductions with the added polynomial relations from PlonK.
This $\relation_{\mathsf{PCS},\ell}$ instance is reduced back into a single $\relAcc$ instance,
which is included with the proof.

**Multiple Accumulators (the case of PCD):**
## Multiple Accumulators (PCD Case)

From the section above it may seem like there is always going to be a single $\relAcc$ instance;
this is indeed the case if the proof only verifies a <u>single</u> proof, called "Incremental Verifiable Computation" (IVC) in the literature.
If the proof verifies <u>multiple</u> proofs, "Proof-Carrying Data" (PCD), then there will be multiple accumulators:
@@ -718,7 +728,7 @@ Let $\mathcal{C} \subseteq \FF$ be the challenge space (128-bit GLV decomposed c
1. PlonK verifier on $\pi$ outputs polynomial relations (in Purple in Fig. 4).
1. Reducing $\relation_{\mathsf{Acc}, \vec{G}}$ and the polynomial relations (from PlonK) to $\relation_{\mathsf{PCS},d}$ (the dotted arrows):
1. Sample $\chaleval \sample \mathcal{C}$ (evaluation point) using the Poseidon sponge.
1. Read claimed evaluations at $\chaleval$ and $\omega \chaleval$ (`ProofEvaluations`).
1. Read claimed evaluations at $\chaleval$ and $\omega \chaleval$ (`PointEvaluations`).
1. Sample $\chalu \sample \mathcal{C}$ (commitment combination challenge) using the Poseidon sponge.
1. Sample $\chalv \sample \mathcal{C}$ (evaluation combination challenge) using the Poseidon sponge.
1. Compute $C \in \GG$ with $\chalu$ from:
42 changes: 32 additions & 10 deletions poly-commitment/src/evaluation_proof.rs
Expand Up @@ -212,6 +212,9 @@ impl<G: CommitmentCurve> SRS<G> {

let (p, blinding_factor) = combine_polys::<G, D>(plnms, polyscale, self.g.len());

// @volhovm: FIXME: this duplicates the definition of rounds
// above. Either it should be removed, or it's a bug and it
// should use the local g, and not self.g.
let rounds = math::ceil_log2(self.g.len());

// b_j = sum_i r^i elm_i^j
@@ -256,16 +259,21 @@ impl<G: CommitmentCurve> SRS<G> {

for _ in 0..rounds {
let n = g.len() / 2;
let (g_lo, g_hi) = (g[0..n].to_vec(), g[n..].to_vec());
// Pedersen bases
let (g_lo, g_hi) = (&g[0..n], &g[n..]);
// Polynomial coefficients
let (a_lo, a_hi) = (&a[0..n], &a[n..]);
// Evaluation points
let (b_lo, b_hi) = (&b[0..n], &b[n..]);

// Blinders for L/R
let rand_l = <G::ScalarField as UniformRand>::rand(rng);
let rand_r = <G::ScalarField as UniformRand>::rand(rng);

// Pedersen commitment to a_hi: l = <a_hi, g_lo> + rand_l * self.h + <a_hi, b_lo> * u
let l = VariableBaseMSM::multi_scalar_mul(
&[&g[0..n], &[self.h, u]].concat(),
&[&a[n..], &[rand_l, inner_prod(a_hi, b_lo)]]
&[g_lo, &[self.h, u]].concat(),
&[a_hi, &[rand_l, inner_prod(a_hi, b_lo)]]
.concat()
.iter()
.map(|x| x.into_repr())
@@ -274,8 +282,8 @@ impl<G: CommitmentCurve> SRS<G> {
.into_affine();

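// Pedersen commitment to a_lo: r = <a_lo, g_hi> + rand_r * self.h + <a_lo, b_hi> * u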
let r = VariableBaseMSM::multi_scalar_mul(
&[&g[n..], &[self.h, u]].concat(),
&[&a[0..n], &[rand_r, inner_prod(a_lo, b_hi)]]
&[g_hi, &[self.h, u]].concat(),
&[a_lo, &[rand_r, inner_prod(a_lo, b_hi)]]
.concat()
.iter()
.map(|x| x.into_repr())
@@ -289,13 +297,15 @@ impl<G: CommitmentCurve> SRS<G> {
sponge.absorb_g(&[l]);
sponge.absorb_g(&[r]);

// Round #i challenges
let u_pre = squeeze_prechallenge(&mut sponge);
let u = u_pre.to_field(&endo_r);
let u_inv = u.inverse().unwrap();

chals.push(u);
chal_invs.push(u_inv);

// Folding polynomial coefficients
a = a_hi
.par_iter()
.zip(a_lo)
@@ -308,6 +318,7 @@ impl<G: CommitmentCurve> SRS<G> {
})
.collect();

// Folding evaluation points
b = b_lo
.par_iter()
.zip(b_hi)
@@ -320,23 +331,33 @@ impl<G: CommitmentCurve> SRS<G> {
})
.collect();

g = G::combine_one_endo(endo_r, endo_q, &g_lo, &g_hi, u_pre);
// Folding bases
g = G::combine_one_endo(endo_r, endo_q, g_lo, g_hi, u_pre);
}

assert!(g.len() == 1);
assert!(
g.len() == 1 && a.len() == 1 && b.len() == 1,
"Commitment folding must produce single elements after log rounds"
);
let a0 = a[0];
let b0 = b[0];
let g0 = g[0];

// Schnorr/Sigma-protocol part

// r_prime = blinding_factor + \sum_i (rand_l[i] * (u[i]^{-1}) + rand_r[i] * u[i])
// where u is a vector of folding challenges, and rand_l/rand_r are
// intermediate L/R blinders
let r_prime = blinders
.iter()
.zip(chals.iter().zip(chal_invs.iter()))
.map(|((l, r), (u, u_inv))| ((*l) * u_inv) + (*r * u))
.map(|((rand_l, rand_r), (u, u_inv))| ((*rand_l) * u_inv) + (*rand_r * u))
.fold(blinding_factor, |acc, x| acc + x);

let d = <G::ScalarField as UniformRand>::rand(rng);
let r_delta = <G::ScalarField as UniformRand>::rand(rng);

// delta = (g0 + u*b0)*d + h*r_delta
let delta = ((g0.into_projective() + (u.mul(b0))).into_affine().mul(d)
+ self.h.mul(r_delta))
.into_affine();
@@ -345,7 +366,7 @@ impl<G: CommitmentCurve> SRS<G> {
let c = ScalarChallenge(sponge.challenge()).to_field(&endo_r);

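// Responses to the challenge c: z1 opens the folded witness a0, z2 opens the combined blinder r_prime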
let z1 = a0 * c + d;
let z2 = c * r_prime + r_delta;
let z2 = r_prime * c + r_delta;

OpeningProof {
delta,
@@ -410,7 +431,7 @@ impl<G: CommitmentCurve> SRS<G> {
#[derive(Clone, Debug, Serialize, Deserialize, Default)]
#[serde(bound = "G: ark_serialize::CanonicalDeserialize + ark_serialize::CanonicalSerialize")]
pub struct OpeningProof<G: AffineCurve> {
/// vector of rounds of L & R commitments
/// Vector of rounds of L & R commitments
#[serde_as(as = "Vec<(o1_utils::serialization::SerdeAs, o1_utils::serialization::SerdeAs)>")]
pub lr: Vec<(G, G)>,
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
@@ -419,6 +440,7 @@ pub struct OpeningProof<G: AffineCurve> {
pub z1: G::ScalarField,
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
pub z2: G::ScalarField,
/// A final folded commitment base
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
pub sg: G,
}
