
Merge develop -> main #1621

Closed · wants to merge 78 commits from develop into main

Commits (78)
1990bf5
Capture zk_rows as a variable in the expression framework
mrmr1993 Jun 1, 2023
a85f2cd
cargo fmt
mrmr1993 Jun 6, 2023
4f1a904
Add link to GitHub
dannywillems Aug 30, 2023
5e91825
This TODO is done
dannywillems Aug 30, 2023
9be01f7
Remove snarky
dannywillems Sep 29, 2023
99b35be
Add helpers for inferring feature flags
mrmr1993 Oct 9, 2023
33252de
Derive binding types for FeatureFlags
mrmr1993 Oct 9, 2023
6cac18b
Improve errors around domain creation
mrmr1993 Oct 10, 2023
0467bce
Fixup edge-case around chunking in constraint system
mrmr1993 Oct 10, 2023
51d7da6
Merge pull request #1276 from o1-labs/feature/better-domain-errors
Oct 10, 2023
6280cfa
Merge branch 'develop' into dannywillems/remove-snarky-section
dannywillems Oct 10, 2023
a6782ca
Merge pull request #1277 from o1-labs/feature/fix-edge-case-around-ch…
Oct 10, 2023
b270e38
Merge pull request #1259 from o1-labs/dannywillems/remove-snarky-section
dannywillems Oct 10, 2023
1adc8a7
Merge branch 'feature/num-zk-rows-variable' into feature/num-zk-rows-…
mrmr1993 Oct 10, 2023
b1bc089
Clippy :')
mrmr1993 Oct 10, 2023
488ad18
[#14070] Add a unit check for the table id zero value condition
volhovm Oct 2, 2023
74b5b0c
[#14070] Fix the table dummy rows bug
volhovm Oct 12, 2023
eaf3184
[#14097,#14070] Move lookup table consistency checks into LookupTable…
volhovm Oct 12, 2023
6ec48aa
More opinionated clippy changes
mrmr1993 Oct 12, 2023
21dfdf6
Merge branch 'develop' into feature/num-zk-rows-variable2
mrmr1993 Oct 12, 2023
a2e577c
Fixup compilation after clippy's 'help'
mrmr1993 Oct 12, 2023
fe78e35
Fixup test
mrmr1993 Oct 12, 2023
09ba6e2
cargo fmt
mrmr1993 Oct 14, 2023
88bf986
Merge pull request #1279 from o1-labs/feature/num-zk-rows-variable2
mrmr1993 Oct 14, 2023
2cf2ea4
Cosmetics on the lookup RFC
volhovm Oct 21, 2023
c5d1459
Capitalize book summary titles
volhovm Oct 23, 2023
d71b06b
Add .ignore file to .gitignore
volhovm Oct 23, 2023
51e2672
Improve RFC3, fix rendering in kimchi/overview.md
volhovm Oct 23, 2023
f38a2c3
Cargo: use wasm-bindgen 0.2.87
rbonichon Oct 26, 2023
57a3d8b
Merge pull request #1302 from o1-labs/rb/wasm-bindgen-0.2.87
dannywillems Oct 26, 2023
ab79316
Merge pull request #1292 from o1-labs/volhovm/rfc3-cosmetics
dannywillems Oct 30, 2023
81f1296
Initially move sections around [MinaProtocol/mina#14442]
volhovm Oct 31, 2023
992448e
Bump up actions/@checkout to 4.1.1
dannywillems Nov 1, 2023
b66e145
Merge pull request #1316 from o1-labs/dw/update-actions-to-4.1.1-develop
dannywillems Nov 1, 2023
251ab0c
[#14097,#14070] Enforce using constructor through private fields
volhovm Oct 16, 2023
cb46062
Formatting
volhovm Oct 16, 2023
e3aceaf
[#14097,#14070] Fix lookup test creating invalid id=0 zero table
volhovm Oct 16, 2023
1fac311
Add table id collision assertions to LookupConstraintSystem#create
volhovm Oct 24, 2023
d0e5de6
Improve lookup collision checks, fixed broken tests
volhovm Oct 30, 2023
64e9155
Add a test for lookup table id collisions
volhovm Oct 30, 2023
2121c7f
Address PR comments [MinaProtocol/mina#14070]
volhovm Nov 2, 2023
d79015d
Revert HashSet<LookupTable<_>> changes
volhovm Nov 2, 2023
2bcd3db
Consistency pass on book chapters / styling
volhovm Nov 2, 2023
73ae00a
Prettify sections more
volhovm Nov 7, 2023
ab36dce
Merge pull request #1275 from o1-labs/feature/infer-features-from-gates
dannywillems Nov 9, 2023
e9250aa
Merge branch 'develop' into misc/link-todo-with-gh-issues
dannywillems Nov 14, 2023
a285216
Add TODO link
dannywillems Nov 14, 2023
da65a25
Deduplicate lookup tables
mrmr1993 Nov 16, 2023
88c1776
Merge pull request #1334 from o1-labs/feature/deduplicate-lookup-table
dannywillems Nov 16, 2023
c8ebc93
Merge branch 'develop' into volhovm/mina14070-table-id-zero-row
volhovm Nov 16, 2023
7f6eba0
Merge pull request #1263 from o1-labs/volhovm/mina14070-table-id-zero…
dannywillems Nov 21, 2023
2f94807
Merge pull request #1304 from o1-labs/volhovm/mina14442-move-old-rfcs…
volhovm Nov 21, 2023
8920a01
Bare URL's are not supported and must be wrapped.
dannywillems Nov 27, 2023
25156a2
Update specifications
dannywillems Nov 27, 2023
901033a
Merge pull request #1200 from o1-labs/misc/link-todo-with-gh-issues
dannywillems Nov 28, 2023
6c6adca
Fix resolver warnings
dannywillems Nov 29, 2023
3908626
Merge pull request #1348 from o1-labs/dw/fix-resolver-warnings-develop
Nov 29, 2023
1c30635
Additional comment on the table_id_combiner
dannywillems Nov 29, 2023
bde1c05
Merge pull request #1355 from o1-labs/dw/additional-comments-on-combi…
dannywillems Nov 30, 2023
267105a
Revert "Merge pull request #1263 from o1-labs/volhovm/mina14070-table…
mrmr1993 Dec 4, 2023
5685f41
Merge pull request #1378 from o1-labs/feature/revert-bad-par
mrmr1993 Dec 4, 2023
8e0c0af
Activate feature std_rng
dannywillems Dec 6, 2023
cbaa107
Generate seed, print in case of errors and initialize a RNG with
dannywillems Dec 6, 2023
86df71f
Merge pull request #1442 from o1-labs/dw/use-seedable-rng-for-some-tests
dannywillems Dec 6, 2023
3200191
Constraint: add explanation on the + 1
dannywillems Dec 6, 2023
94f2d22
Merge pull request #1446 from o1-labs/dw/add-comment-constraint
dannywillems Dec 6, 2023
4597308
Add test checking that the zero dummy value is always present
dannywillems Dec 6, 2023
f1b5691
Update kimchi/src/tests/lookup.rs
dannywillems Dec 6, 2023
dfb2b67
Merge pull request #1445 from o1-labs/dw/add-check-dummy-value-lookup
mrmr1993 Dec 6, 2023
8b97256
Fix 1362: add the dummy entry in the domain size count
dannywillems Dec 6, 2023
ac8fc43
Merge pull request #1457 from o1-labs/dw/fix-1362
dannywillems Dec 7, 2023
c2d289f
Lookup: a table with ID 0 *must* contain a row with 0 only
dannywillems Dec 6, 2023
272504d
Merge pull request #1454 from o1-labs/dw/add-test-when-table-with-zer…
dannywillems Dec 7, 2023
de0c54c
Lookup: add more documentation regarding the dummy value
dannywillems Dec 6, 2023
73410f2
Merge pull request #1455 from o1-labs/dw/doc-lookup-dummy-value-explain
dannywillems Dec 7, 2023
83148db
Test: number of gates should be at least 2.
dannywillems Dec 7, 2023
be69da0
Merge pull request #1510 from o1-labs/dw/fix-nb-gates-develop
dannywillems Dec 11, 2023
719f856
Merge branch 'develop' into o1js-main
mitschabaude Dec 19, 2023
2 changes: 1 addition & 1 deletion .github/workflows/benches.yml
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,7 @@ jobs:
if: github.event.label.name == 'benchmark'
steps:
- name: Checkout PR
uses: actions/checkout@v2
uses: actions/checkout@v4.1.1

# as action-rs does not seem to be maintained anymore, building from
# scratch the environment using rustup
Expand Down
2 changes: 1 addition & 1 deletion .github/workflows/coverage.yml.disabled
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,7 @@ jobs:
timeout-minutes: 60
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v4.1.1
with:
persist-credentials: false

Expand Down
2 changes: 1 addition & 1 deletion .github/workflows/gh-page.yml
Original file line number Diff line number Diff line change
Expand Up @@ -16,7 +16,7 @@ jobs:

steps:
- name: Checkout Repository
uses: actions/checkout@v2
uses: actions/checkout@v4.1.1

# as action-rs does not seem to be maintained anymore, building from
# scratch the environment using rustup
Expand Down
2 changes: 1 addition & 1 deletion .github/workflows/rust.yml
Original file line number Diff line number Diff line change
Expand Up @@ -26,7 +26,7 @@ jobs:
name: Run some basic checks and tests
steps:
- name: Checkout PR
uses: actions/checkout@v3
uses: actions/checkout@v4.1.1

# as action-rs does not seem to be maintained anymore, building from
# scratch the environment using rustup
Expand Down
4 changes: 3 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -25,4 +25,6 @@ _build

*.html
# If symlink created for kimchi-visu
tools/srs
tools/srs

.ignore
1 change: 1 addition & 0 deletions Cargo.toml
Original file line number Diff line number Diff line change
Expand Up @@ -14,6 +14,7 @@ members = [
"utils",
"internal-tracing",
]
resolver = "2"

[profile.release]
lto = true
Expand Down
73 changes: 26 additions & 47 deletions book/src/SUMMARY.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,15 +9,15 @@
- [Rings](./fundamentals/zkbook_rings.md)
- [Fields](./fundamentals/zkbook.md)
- [Polynomials](./fundamentals/zkbook_polynomials.md)
- [Multiplying polynomials](./fundamentals/zkbook_multiplying_polynomials.md)
- [Fast Fourier transform](./fundamentals/zkbook_fft.md)
- [Multiplying Polynomials](./fundamentals/zkbook_multiplying_polynomials.md)
- [Fast Fourier Transform](./fundamentals/zkbook_fft.md)

# Cryptographic tools
# Cryptographic Tools

- [Commitments](./fundamentals/zkbook_commitment.md)
- [Polynomial commitments](./plonk/polynomial_commitments.md)
- [Inner product argument](./plonk/inner_product.md)
- [Different functionnalities](./plonk/inner_product_api.md)
- [Polynomial Commitments](./plonk/polynomial_commitments.md)
- [Inner Product Argument](./plonk/inner_product.md)
- [Different Functionalities](./plonk/inner_product_api.md)
- [Two Party Computation](./fundamentals/zkbook_2pc/overview.md)
- [Garbled Circuits](./fundamentals/zkbook_2pc/gc.md)
- [Basics](./fundamentals/zkbook_2pc/basics.md)
Expand All @@ -27,71 +27,50 @@
- [Half Gate](./fundamentals/zkbook_2pc/halfgate.md)
- [Full Description](./fundamentals/zkbook_2pc/fulldesc.md)
- [Fixed-Key-AES Hashes](./fundamentals/zkbook_2pc/fkaes.md)

- [Oblivious Transfer](./fundamentals/zkbook_2pc/ot.md)
- [Base OT](./fundamentals/zkbook_2pc/baseot.md)
- [OT Extension](./fundamentals/zkbook_2pc/ote.md)

- [Full Protocol](./fundamentals/zkbook_2pc/2pc.md)

# Proof systems

- [Overview](./fundamentals/proof_systems.md)
- [zk-SNARKs](./fundamentals/zkbook_plonk.md)
- [Custom constraints](./fundamentals/custom_constraints.md)
- [Proof Systems](./fundamentals/proof_systems.md)
- [zk-SNARKs](./fundamentals/zkbook_plonk.md)

# Background on PLONK

- [Overview](./plonk/overview.md)
- [Glossary](./plonk/glossary.md)
- [Glossary](./plonk/glossary.md)
- [Domain](./plonk/domain.md)
- [Lagrange basis in multiplicative subgroups](./plonk/lagrange.md)
- [Non-interaction with fiat-shamir](./plonk/fiat_shamir.md)
- [Lagrange Basis in Multiplicative Subgroups](./plonk/lagrange.md)
- [Non-Interactivity via Fiat-Shamir](./plonk/fiat_shamir.md)
- [Plookup](./plonk/plookup.md)
- [Maller's optimization](./plonk/maller.md)
- [Maller's Optimization](./plonk/maller.md)
- [Zero-Column Approach to Zero-Knowledge](./plonk/zkpm.md)

# Kimchi

- [Overview](./kimchi/overview.md)
- [Arguments](./kimchi/arguments.md)
- [Custom gates](./kimchi/gates.md)
- [Permutation](./kimchi/permut.md)
- [Lookup](./kimchi/lookup.md)

# Snarky
- [Arguments](./kimchi/arguments.md)
- [Final Check](./kimchi/final_check.md)
- [Maller's Optimization for Kimchi](./kimchi/maller_15.md)
- [Lookup Tables](./kimchi/lookup.md)
- [Extended Lookup Tables](./kimchi/extended-lookup-tables.md)
- [Custom Constraints](./kimchi/custom_constraints.md)
- [Custom Gates](./kimchi/gates.md)
- [Foreign Field Addition](./kimchi/foreign_field_add.md)
- [Foreign Field Multiplication](./kimchi/foreign_field_mul.md)
- [Keccak](./kimchi/keccak.md)

- [Overview](./snarky/overview.md)
- [API](./snarky/api.md)
- [snarky wrapper](./snarky/snarky-wrapper.md)
- [Kimchi backend](./snarky/kimchi-backend.md)
- [Vars](./snarky/vars.md)
- [Booleans](./snarky/booleans.md)
- [Circuit generation](./snarky/circuit-generation.md)
- [Witness generation](./snarky/witness-generation.md)

# Pickles & Inductive Proof Systems

- [Overview](./fundamentals/zkbook_ips.md)
- [Accumulation](./pickles/accumulation.md)
- [Deferred Computation](./pickles/deferred.md)
- [Passthrough & Me-Only](./pickles/passthrough.md)

# RFCs

- [RFC 0: Alternative zero-knowledge](./plonk/zkpm.md)
- [RFC 1: Final check](./plonk/final_check.md)
- [RFC 2: Maller's optimization for kimchi](./plonk/maller_15.md)
- [RFC 3: Plookup integration in kimchi](./rfcs/3-lookup.md)
- [RFC 4: Extended lookup tables](./rfcs/extended-lookup-tables.md)
- [RFC 5: Foreign Field Addition](./rfcs/foreign_field_add.md)
- [RFC 6: Foreign Field Multiplication](./rfcs/foreign_field_mul.md)
- [RFC 7: Keccak](./rfcs/keccak.md)

# Specifications
# Technical Specifications

- [Poseidon hash](./specs/poseidon.md)
- [Polynomial commitment](./specs/poly-commitment.md)
- [Pasta curves](./specs/pasta.md)
- [Polynomial Commitment](./specs/poly-commitment.md)
- [Pasta Curves](./specs/pasta.md)
- [Kimchi](./specs/kimchi.md)
- [Universal Reference String (URS)](./specs/urs.md)
- [Pickles](./specs/pickles.md)
Expand Down
59 changes: 1 addition & 58 deletions book/src/fundamentals/custom_constraints.md
Original file line number Diff line number Diff line change
@@ -1,58 +1 @@
This section explains how to design and add a custom constraint to our `proof-systems` library.

PLONK is an AIOP. That is, it is a protocol in which the prover sends polynomials as messages and the verifier sends random challenges, and then evaluates the prover's polynomials and performs some final checks on the outputs.

PLONK is very flexible. It can be customized with constraints specific to computations of interest. For example, in Mina, we use a PLONK configuration called kimchi that has custom constraints for poseidon hashing, doing elliptic curve operations, and more.

A "PLONK configuration" specifies
- The set of types of constraints that you would like to be able to enforce. We will describe below how these types of constraints are specified.
- A number of "eq-able" columns `W`
- A number of "advice" columns `A`

Under such configuration, a circuit is specified by
- A number of rows `n`
- A vector `cs` of constraint-types of length `n`. I.e., a vector that specifies, for each row, which types of constraints should be enforced on that row.
- A vector `eqs : Vec<(Position, Position)>` of equalities to enforce, where `struct Position { row: usize, col: usize }`. E.g., if the pair `(Position { row: 0, col: 8 }, Position { row: 10, col: 2 })` is in `eqs`, then the circuit is saying the entries in those two positions should be equal, or in other words that they refer to the same value. This is where the distinction between "eq-able" and "advice" columns comes in. The `col` field of a position in the `eqs` array can only refer to one of the first `W` columns. Equalities cannot be enforced on entries in the `A` columns after that.

Then, given such a circuit, PLONK lets you produce proofs for the statement

> I know `W + A` "column vectors" of field elements `vs: [Vec<F>; W + A]` such that for each row index `i < n`, the constraint of type `cs[i]` holds on the values `[vs[0][i], ..., vs[W+A-1][i], vs[0][i+1], ..., vs[W+A-1][i+1]]`, and all the equalities in `eqs` hold. I.e., for `(p1, p2)` in `eqs` we have `vs[p1.col][p1.row] == vs[p2.col][p2.row]`. So, a constraint can check the values in two adjacent rows.

## Specifying a constraint

Mathematically speaking, a constraint is a multivariate polynomial over the variables $v_{\mathsf{Curr},0}, \dots, v_{\mathsf{Curr}, W+A-1}, v_{\mathsf{Next}, 0}, \dots, v_{\mathsf{Next}, W+A-1}$. In other words, there is one variable corresponding to the value of each column in the "current row" and one variable corresponding to the value of each column in the "next row".

In Rust, $v_{r, i}$ is written `E::cell(Column::Witness(i), r)`. So, for example, the variable $v_{\mathsf{Next}, 3}$ is written
`E::cell(Column::Witness(3), CurrOrNext::Next)`.



A convenient shorthand for current-row witness variables is `let w = |i| v(Column::Witness(i));`, where `v` abbreviates a cell reference in the current row.

## Defining a PLONK configuration

The art in proof systems comes from knowing how to design a PLONK configuration to ensure maximal efficiency for the sorts of computations you are trying to prove. That is, how to choose the numbers of columns `W` and `A`, and how to define the set of constraint types.

Let's describe the trade-offs involved here.

The majority of the proving time for the PLONK prover is in
- committing to the `W + A` column polynomials, which have length equal to the number of rows `n`
- committing to the "permutation accumulator" polynomial, which has length `n`.
- committing to the quotient polynomial, which reduces to computing `max(k, W)` MSMs of size `n`, where `k` is the max degree of a constraint.
- performing the commitment opening proof, which is mostly dependent on the number of rows `n`.

So all in all, the proving time is approximately equal to the time to perform `W + A + 1 + max(k - 1, W)` MSMs of size `n`, plus the cost of an opening proof for polynomials of degree `n - 1`, and possibly also the cost of computing the combined constraint polynomial, which has degree `k * n` where `k` is the maximum degree of a constraint.

Increasing `W` and `A` increases proof size, and potentially impacts prover time, as the prover must compute polynomial commitments to each column, and computing a polynomial commitment corresponds to doing one MSM (multi-scalar multiplication, also called a multi-exponentiation).

However, often increasing the number of columns allows you to decrease the number of rows required for a given computation. For example, if you can perform one Poseidon hash in 36 rows with 5 total columns, then you can also perform it in 12 (= 36 / 3) rows with 15 (= 5 * 3) total columns.

**Decreasing the number of rows (even while keeping the total number of table entries the same) is desirable because it reduces the cost of the polynomial commitment opening proof, which is dominated by a factor linear in the number of rows, and barely depends on the number of columns.**

Increasing the number of columns also increases verifier time, as the verifier must perform one scalar-multiplication and one hash per column. Proof length is also affected by a larger number of columns, as more polynomials need to be committed and sent along to the verifier.

There is typically some interplay between these costs, so the best configuration depends on the computations being proved.
# Custom constraints
24 changes: 11 additions & 13 deletions book/src/fundamentals/proof_systems.md
Original file line number Diff line number Diff line change
@@ -1,31 +1,29 @@
# Overview
# Proof Systems Design Overview

Many modern proof systems (and I think all that are in use) are constructed according to the following recipe.

1. You start out with a class of computations.

2. You devise a way to *arithmetize* those computations. That is, to express your computation as a statement about polynomials.

More specifically, you describe what is often called an "algebraic interactive oracle proof" (AIOP) that encodes your computation. An AIOP is a protocol describing an interaction between a prover and a verifier, in which the prover sends the verifier some "polynomial oracles" (basically a black box function that given a point evaluates a polynomial at that point), the verifier sends the prover random challenges, and at the end, the verifier queries the prover's polynomials at points of its choosing and makes a decision as to whether it has been satisfied by the proof.

3. An AIOP is an imagined interaction between parties. It is an abstract description of the protocol that will be "compiled" into a SNARK. There are several "non-realistic" aspects about it. One is that the prover sends the verifier black-box polynomials that the verifier can evaluate. These polynomials have degree comparable to the size of the computation being verified. If we implemented these "polynomial oracles" by having the prover really send the $O(n)$ size polynomials (say by sending all their coefficients), then we would not have a zk-SNARK at all, since the verifier would have to read this linearly sized polynomial so we would lose succinctness, and the polynomials would not be black-box functions, so we may lose zero-knowledge.

Instead, when we concretely instantiate the AIOP, we have the prover send constant-sized, hiding *polynomial commitments*. Then, in the phase of the AIOP where the verifier queries the polynomials, the prover sends an *opening proof* for the polynomial commitments which the verifier can check, thus simulating the activity of evaluating the prover's polynomials on your own.

So this is the next step of making a SNARK: instantiating the AIOP with a polynomial commitment scheme of one's choosing. There are several choices here and these affect the properties of the SNARK you are constructing, as the SNARK will inherit efficiency and setup properties of the polynomial commitment scheme used.

4. An AIOP describes an interactive protocol between the verifier and the prover. In reality, typically, we also want our proofs to be non-interactive.
4. An AIOP describes an interactive protocol between the verifier and the prover. In reality, typically, we also want our proofs to be non-interactive.

This is accomplished by what is called the [Fiat--Shamir transformation](). The basic idea is this: all that the verifier is doing is sampling random values to send to the prover. Instead, to generate a "random" value, the prover simulates the verifier by hashing its messages. The resulting hash is used as the "random" challenge.

At this point we have a fully non-interactive proof. Let's review our steps.

1. Start with a computation.

2. Translate the computation into a statement about polynomials and design a corresponding AIOP.

3. Compile the AIOP into an interactive protocol by having the prover send hiding polynomial commitments instead of polynomial oracles.

4. Get rid of the verifier-interaction by replacing it with a hash function. I.e., apply the Fiat--Shamir transform.

3. Compile the AIOP into an interactive protocol by having the prover send hiding polynomial commitments instead of polynomial oracles.

4. Get rid of the verifier-interaction by replacing it with a hash function. I.e., apply the Fiat--Shamir transform.
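The Fiat–Shamir step above can be sketched with a toy transcript. This is an illustration only: the `Transcript` type is hypothetical (not this library's API), and Rust's non-cryptographic `DefaultHasher` stands in for the cryptographic sponge (e.g. Poseidon) a real transform would use.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy transcript illustrating the Fiat--Shamir idea: each "verifier
/// challenge" is derived by hashing everything the prover has sent so
/// far, so no actual interaction is needed.
struct Transcript {
    data: Vec<u8>,
}

impl Transcript {
    fn new() -> Self {
        Transcript { data: Vec::new() }
    }

    /// Absorb a prover message (e.g. a serialized polynomial commitment).
    fn absorb(&mut self, msg: &[u8]) {
        self.data.extend_from_slice(msg);
    }

    /// Derive a deterministic challenge from all messages so far.
    fn challenge(&mut self) -> u64 {
        let mut h = DefaultHasher::new();
        self.data.hash(&mut h);
        let c = h.finish();
        // Absorb the challenge too, so later challenges depend on it.
        self.data.extend_from_slice(&c.to_le_bytes());
        c
    }
}

fn main() {
    let mut prover = Transcript::new();
    prover.absorb(b"commitment-1");
    let c1 = prover.challenge();

    // The verifier, replaying the same prover messages, recomputes the
    // same challenge without ever talking to the prover.
    let mut verifier = Transcript::new();
    verifier.absorb(b"commitment-1");
    assert_eq!(c1, verifier.challenge());
    println!("challenges agree");
}
```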
58 changes: 58 additions & 0 deletions book/src/kimchi/custom_constraints.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,58 @@
This section explains how to design and add a custom constraint to our `proof-systems` library.

PLONK is an AIOP. That is, it is a protocol in which the prover sends polynomials as messages and the verifier sends random challenges, and then evaluates the prover's polynomials and performs some final checks on the outputs.

PLONK is very flexible. It can be customized with constraints specific to computations of interest. For example, in Mina, we use a PLONK configuration called kimchi that has custom constraints for poseidon hashing, doing elliptic curve operations, and more.

A "PLONK configuration" specifies
- The set of types of constraints that you would like to be able to enforce. We will describe below how these types of constraints are specified.
- A number of "eq-able" columns `W`
- A number of "advice" columns `A`

Under such configuration, a circuit is specified by
- A number of rows `n`
- A vector `cs` of constraint-types of length `n`. I.e., a vector that specifies, for each row, which types of constraints should be enforced on that row.
- A vector `eqs : Vec<(Position, Position)>` of equalities to enforce, where `struct Position { row: usize, col: usize }`. E.g., if the pair `(Position { row: 0, col: 8 }, Position { row: 10, col: 2 })` is in `eqs`, then the circuit is saying the entries in those two positions should be equal, or in other words that they refer to the same value. This is where the distinction between "eq-able" and "advice" columns comes in. The `col` field of a position in the `eqs` array can only refer to one of the first `W` columns. Equalities cannot be enforced on entries in the `A` columns after that.

Then, given such a circuit, PLONK lets you produce proofs for the statement

> I know `W + A` "column vectors" of field elements `vs: [Vec<F>; W + A]` such that for each row index `i < n`, the constraint of type `cs[i]` holds on the values `[vs[0][i], ..., vs[W+A-1][i], vs[0][i+1], ..., vs[W+A-1][i+1]]`, and all the equalities in `eqs` hold. I.e., for `(p1, p2)` in `eqs` we have `vs[p1.col][p1.row] == vs[p2.col][p2.row]`. So, a constraint can check the values in two adjacent rows.
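The equality part of this statement can be illustrated with a small sketch (hypothetical types, not the kimchi API, with field elements replaced by `u64` for readability):

```rust
// Hypothetical sketch of the copy-constraint check over column vectors.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Position {
    row: usize,
    col: usize,
}

/// `vs[c][r]` is the value in column `c`, row `r`; only the first `w`
/// ("eq-able") columns may appear in `eqs`.
fn equalities_hold(vs: &[Vec<u64>], eqs: &[(Position, Position)], w: usize) -> bool {
    eqs.iter().all(|(p1, p2)| {
        // Advice columns (index >= w) cannot be constrained equal.
        assert!(p1.col < w && p2.col < w);
        vs[p1.col][p1.row] == vs[p2.col][p2.row]
    })
}

fn main() {
    // Two columns, three rows; enforce vs[0][0] == vs[1][2].
    let vs = vec![vec![7, 1, 2], vec![3, 4, 7]];
    let eqs = [(Position { row: 0, col: 0 }, Position { row: 2, col: 1 })];
    assert!(equalities_hold(&vs, &eqs, 2));
    println!("equalities hold");
}
```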

## Specifying a constraint

Mathematically speaking, a constraint is a multivariate polynomial over the variables $v_{\mathsf{Curr},0}, \dots, v_{\mathsf{Curr}, W+A-1}, v_{\mathsf{Next}, 0}, \dots, v_{\mathsf{Next}, W+A-1}$. In other words, there is one variable corresponding to the value of each column in the "current row" and one variable corresponding to the value of each column in the "next row".

In Rust, $v_{r, i}$ is written `E::cell(Column::Witness(i), r)`. So, for example, the variable $v_{\mathsf{Next}, 3}$ is written
`E::cell(Column::Witness(3), CurrOrNext::Next)`.



A convenient shorthand for current-row witness variables is `let w = |i| v(Column::Witness(i));`, where `v` abbreviates a cell reference in the current row.
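A miniature, hypothetical version of such an expression type (not kimchi's actual `Expr`, and with `i64` in place of field elements) might look like this. It shows how a constraint polynomial over current- and next-row cells can be built and evaluated on a pair of adjacent rows:

```rust
#[derive(Clone, Copy)]
enum CurrOrNext {
    Curr,
    Next,
}

// A constraint is a polynomial in the witness cells of the current and
// next row, built from cells, constants, sums, and products.
enum E {
    Cell { col: usize, row: CurrOrNext },
    Const(i64),
    Add(Box<E>, Box<E>),
    Mul(Box<E>, Box<E>),
}

impl E {
    fn cell(col: usize, row: CurrOrNext) -> E {
        E::Cell { col, row }
    }

    /// Evaluate the expression on (current row, next row) witness values.
    fn eval(&self, curr: &[i64], next: &[i64]) -> i64 {
        match self {
            E::Cell { col, row } => match row {
                CurrOrNext::Curr => curr[*col],
                CurrOrNext::Next => next[*col],
            },
            E::Const(c) => *c,
            E::Add(a, b) => a.eval(curr, next) + b.eval(curr, next),
            E::Mul(a, b) => a.eval(curr, next) * b.eval(curr, next),
        }
    }
}

fn main() {
    // Constraint v_{Curr,0} * v_{Curr,1} - v_{Next,0} = 0, i.e. "the next
    // row's column 0 holds the product of the current row's columns 0 and 1".
    let c = E::Add(
        Box::new(E::Mul(
            Box::new(E::cell(0, CurrOrNext::Curr)),
            Box::new(E::cell(1, CurrOrNext::Curr)),
        )),
        Box::new(E::Mul(
            Box::new(E::Const(-1)),
            Box::new(E::cell(0, CurrOrNext::Next)),
        )),
    );
    assert_eq!(c.eval(&[3, 4], &[12]), 0); // satisfied: 3 * 4 == 12
    assert_eq!(c.eval(&[3, 4], &[11]), 1); // violated by 1
    println!("ok");
}
```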

## Defining a PLONK configuration

The art in proof systems comes from knowing how to design a PLONK configuration to ensure maximal efficiency for the sorts of computations you are trying to prove. That is, how to choose the numbers of columns `W` and `A`, and how to define the set of constraint types.

Let's describe the trade-offs involved here.

The majority of the proving time for the PLONK prover is in
- committing to the `W + A` column polynomials, which have length equal to the number of rows `n`
- committing to the "permutation accumulator" polynomial, which has length `n`.
- committing to the quotient polynomial, which reduces to computing `max(k, W)` MSMs of size `n`, where `k` is the max degree of a constraint.
- performing the commitment opening proof, which is mostly dependent on the number of rows `n`.

So all in all, the proving time is approximately equal to the time to perform `W + A + 1 + max(k - 1, W)` MSMs of size `n`, plus the cost of an opening proof for polynomials of degree `n - 1`, and possibly also the cost of computing the combined constraint polynomial, which has degree `k * n` where `k` is the maximum degree of a constraint.

Increasing `W` and `A` increases proof size, and potentially impacts prover time, as the prover must compute polynomial commitments to each column, and computing a polynomial commitment corresponds to doing one MSM (multi-scalar multiplication, also called a multi-exponentiation).

However, often increasing the number of columns allows you to decrease the number of rows required for a given computation. For example, if you can perform one Poseidon hash in 36 rows with 5 total columns, then you can also perform it in 12 (= 36 / 3) rows with 15 (= 5 * 3) total columns.

**Decreasing the number of rows (even while keeping the total number of table entries the same) is desirable because it reduces the cost of the polynomial commitment opening proof, which is dominated by a factor linear in the number of rows, and barely depends on the number of columns.**

Increasing the number of columns also increases verifier time, as the verifier must perform one scalar-multiplication and one hash per column. Proof length is also affected by a larger number of columns, as more polynomials need to be committed and sent along to the verifier.

There is typically some interplay between these costs, so the best configuration depends on the computations being proved.
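The MSM-count estimate above can be turned into a back-of-the-envelope helper (purely illustrative and a hypothetical function, not part of the library; the real prover has many more moving parts):

```rust
/// Rough MSM count from the cost model above:
/// proving time ~ (W + A + 1 + max(k - 1, W)) MSMs of size n.
fn msm_count(w: usize, a: usize, k: usize) -> usize {
    w + a + 1 + std::cmp::max(k.saturating_sub(1), w)
}

fn main() {
    // Same total table area, two shapes (cf. the Poseidon example above):
    // 5 columns x 36 rows vs 15 columns x 12 rows, constraint degree k = 3.
    let narrow = msm_count(5, 0, 3); // 5 + 0 + 1 + max(2, 5) = 11 MSMs of size 36
    let wide = msm_count(15, 0, 3); // 15 + 0 + 1 + max(2, 15) = 31 MSMs of size 12
    println!("narrow: {narrow} MSMs of size 36, wide: {wide} MSMs of size 12");
}
```

More, shorter MSMs in the wide shape: the win comes from the opening proof, whose cost drops with the number of rows.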