Detailed explanation of the two SNARK tools recently launched by a16z crypto

Jessica
1 year ago
This article is approximately 2,749 words, and reading the entire article takes about 4 minutes
Minimizing the prover's commitment costs, and expanding the territory of high-performance, auditable ZK-based Layer 2s.

This article is from a16z; original author: Justin Thaler, translated by Odaily translator Jessica.


On August 10th, a16z crypto released two new SNARK-based tools, Lasso and Jolt. Together, they represent a new approach to SNARK design that can improve the performance of widely deployed toolchains by an order of magnitude or more, provide a better and more convenient developer experience, and make auditing easier.

SNARK (Succinct Non-Interactive Argument of Knowledge) is a cryptographic protocol that allows anyone to prove to an untrusted verifier that they know a "witness" satisfying certain properties.

  • A prominent use case in Web3 is an L2 proving to the L1 that it knows the digital signatures authorizing a series of transactions, so the signatures themselves do not have to be stored and verified by the L1 network, improving scalability.

  • Applications outside of blockchain include: verifying the validity of all outputs generated by untrusted hardware devices to ensure users can trust them. Individuals can prove, in a zero-knowledge manner, that trusted authorities have issued them credentials proving their age is sufficient to access age-restricted content, without revealing their birth date. Anyone sending encrypted data over a network can prove to administrators that the data complies with network policies, without revealing further details.

While many SNARKs are cheap for verifiers, they typically introduce roughly six orders of magnitude of overhead in the prover's computation. The prover's extra work is highly parallelizable, but a millions-fold overhead severely limits the applicability of SNARKs.

A more performant SNARK will accelerate L2s and enable builders to unlock as-yet-unforeseen applications. This is why we introduce two new related technologies: Lasso, a new lookup argument that significantly improves prover costs; and Jolt, which uses Lasso to provide a new framework for designing SNARKs for so-called zkVMs and, more broadly, front-end designs. Together, they improve the performance, developer experience, and auditability of SNARK designs, thereby enhancing building in Web3.

Our initial implementation of Lasso is already over 10x faster than the lookup argument in halo2, a popular SNARK toolkit. With further optimization of the Lasso codebase, we expect a speedup of about 40x. Jolt includes additional innovations on top of Lasso, and we expect it to achieve similar or better speedups over existing zkVMs.

Lookup Arguments, Lasso, and Jolt

A SNARK front-end is a compiler that converts a computer program into circuits that can be ingested by the SNARK backend. (Note: Circuits are an extremely limited computational model where the only available "primitive operations" are addition and multiplication.)
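To make the circuit model concrete, here is a minimal sketch (not any real front-end's output format, just an illustration) of an arithmetic circuit as a list of add/mul gates, the only primitive operations a SNARK backend understands. It "compiles" the program f(x, y) = (x + y) * y by hand.

```python
# Illustrative only: an arithmetic circuit as a flat list of add/mul gates.
# Wires 0..k-1 hold the inputs; each gate reads two earlier wires and
# appends its output as the next wire.

ADD, MUL = "add", "mul"

def eval_circuit(gates, inputs):
    """Evaluate the circuit gate by gate; returns the last wire's value."""
    wires = list(inputs)
    for op, a, b in gates:
        if op == ADD:
            wires.append(wires[a] + wires[b])
        else:
            wires.append(wires[a] * wires[b])
    return wires[-1]

# Hand-compiled circuit for f(x, y) = (x + y) * y.
circuit = [
    (ADD, 0, 1),   # wire 2 = x + y
    (MUL, 2, 1),   # wire 3 = (x + y) * y
]

assert eval_circuit(circuit, [3, 4]) == (3 + 4) * 4
```

Even this toy shows why circuits are so restrictive: an operation as simple as a bitwise XOR has no single-gate representation and must be simulated with many additions and multiplications.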

A critical tool in modern SNARK design is a protocol called a lookup argument, which allows an untrusted prover to cryptographically commit to a large vector and then prove that every entry of the vector is contained in some predetermined table. Lookup arguments help keep circuits small by efficiently handling operations that are not naturally expressed with a small number of additions and multiplications.
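To illustrate the statement a lookup argument proves (not how it proves it succinctly), here is a toy check over a small prime field: an entry is in the table exactly when the table's vanishing polynomial evaluates to zero at it. The field size and table are illustrative choices.

```python
# Toy illustration of a lookup claim: every entry of a vector lies in a
# fixed table T. We check this via T's vanishing polynomial
# Z_T(x) = prod_{t in T} (x - t) mod P, which is zero iff x is in T.
# A real lookup argument proves the same claim succinctly over commitments.

P = 97  # small prime modulus, for illustration only

def vanishing_eval(table, x):
    """Evaluate Z_T(x) mod P; returns 0 exactly when x appears in table."""
    acc = 1
    for t in table:
        acc = acc * (x - t) % P
    return acc

def all_in_table(vector, table):
    return all(vanishing_eval(table, v) == 0 for v in vector)

TABLE_4BIT = list(range(16))          # table for a 4-bit range check
assert all_in_table([3, 15, 0], TABLE_4BIT)
assert not all_in_table([3, 16], TABLE_4BIT)   # 16 is out of range
```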

However, as Barry Whitehat of the Ethereum Foundation said last year, "If we can effectively define circuits using only lookup arguments, it will lead to simpler tools and circuits" in which the only thing circuits do is lookups. Over time, lookup arguments "will be more efficient than polynomial constraints for almost everything". He dubbed this prospect the "lookup singularity".

This vision contrasts sharply with how things work today, where developers deploy SNARKs either by writing programs in a special domain-specific language (which compiles the program into polynomial constraints) or by hand-coding the constraints directly. This toolchain requires a lot of work and offers a large surface area for security-critical bugs. Even with complex tools and circuits, SNARK performance remains slow, limiting their applicability.

Lasso and Jolt address all three key issues (performance, developer experience, and auditability) and realize the vision of the lookup singularity. Lasso and Jolt also force a rethinking of much of the accepted wisdom in SNARK design.

After providing the necessary background, the rest of this article revisits some common beliefs about SNARK performance and explains how innovations like Lasso and Jolt overturn them.

SNARK Design Background: Why So Slow?

Most SNARK backends require the prover to cryptographically commit to the value of each gate in the circuit, using a polynomial commitment scheme. The prover then proves that the committed values indeed correspond to a correct execution of the witness-checking procedure.
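As a concrete stand-in for a polynomial commitment scheme, the sketch below commits to a vector of gate values with a Merkle tree (FRI-style SNARKs really do Merkle-hash committed data; MSM-based schemes use elliptic curves instead). The point it illustrates is the commitment cost: the prover pays one hash, or one group operation, per committed gate value.

```python
# Sketch: committing to all gate values of a circuit with a Merkle tree.
# The 32-byte root is a binding, succinct stand-in for the whole vector.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(values):
    """Commit to a power-of-two-length list of values; return the root."""
    layer = [h(str(v).encode()) for v in values]   # one hash per gate value
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

gate_values = [7, 12, 7 * 12, 7 * 12 + 5]   # wire values of some tiny circuit
root = merkle_commit(gate_values)
assert len(root) == 32                      # succinct: 32 bytes, not the vector
assert root == merkle_commit(gate_values)   # deterministic, hence binding
```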

I will refer to the prover's work within the polynomial commitment scheme as the commitment cost. SNARKs have additional prover costs outside the polynomial commitment scheme, but the commitment cost is often the bottleneck; this is also the case for Lasso and Jolt. Designing a SNARK in which the commitment cost is not the dominant prover cost does not mean the polynomial commitment scheme is cheap. Rather, it means the other costs are higher than they should be.

Intuitively, the purpose of a commitment is to cryptographically add succinctness to a proof system. When the prover commits to a large vector of values, it is as if the prover has sent the entire vector to the verifier, much as a trivial proof system would send the whole witness. A commitment scheme achieves this without forcing the verifier to actually receive the witness, which means that the purpose of commitment schemes in SNARK design is to control verifier costs.

However, these cryptographic methods are very expensive for provers, especially compared to the information-theoretic methods that SNARKs use outside the polynomial commitment scheme. Information-theoretic methods rely only on finite field operations, and a single field operation is several orders of magnitude faster than committing to a field element.

Depending on the polynomial commitment scheme used, computing a commitment involves multi-exponentiations (also known as multi-scalar multiplications, or MSMs) or FFTs plus Merkle hashing. Lasso and Jolt can use any polynomial commitment scheme, but their costs are particularly attractive when instantiated with MSM-based schemes such as IPA/Bulletproofs, KZG/PST, Hyrax, Dory, or Zeromorph.
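The sketch below shows what an MSM computes, written multiplicatively in the group of integers mod a prime so plain Python suffices (real schemes use elliptic-curve groups; the bases and modulus here are illustrative): a Pedersen-style commitment is the product of public bases raised to the committed values.

```python
# Sketch of a multi-scalar multiplication (MSM) in the multiplicative group
# mod P, standing in for an elliptic-curve group: commit = prod g_i^{e_i}.

P = 2**61 - 1   # a Mersenne prime; Z_P^* stands in for a curve group

def msm(bases, scalars, p=P):
    """Naive MSM: one modular exponentiation per (base, scalar) pair."""
    acc = 1
    for g, e in zip(bases, scalars):
        acc = acc * pow(g, e, p) % p
    return acc

bases = [3, 5, 7, 11]   # public "generators" (illustrative, not a real setup)
assert msm(bases, [2, 0, 1, 1]) == (3**2 * 7 * 11) % P

# Such commitments are additively homomorphic in the committed scalars:
lhs = msm(bases, [1, 2, 3, 4]) * msm(bases, [4, 3, 2, 1]) % P
assert lhs == msm(bases, [5, 5, 5, 5])
```

The homomorphism in the last lines is one reason MSM-based schemes are attractive building blocks despite their reputation for cost.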

Why Lasso and Jolt are important

Lasso is a new lookup argument in which the prover commits to fewer and smaller values than in previous work. This can yield a speedup of 20x or more, with a 2x to 4x speedup coming from committing to fewer values and another 10x from the fact that all committed values in Lasso are small. Unlike many earlier lookup arguments, Lasso (and Jolt) also avoids FFTs, which are space-hungry and can become a bottleneck on large instances.

Furthermore, Lasso works even for enormous tables, as long as those tables are "structured" in a precise technical sense. Such tables are far too large for anyone to materialize explicitly, but Lasso pays only for the table elements that are actually accessed. In particular, unlike prior lookup arguments, no party ever needs to cryptographically commit to all the values in a structured table.

Lasso leverages two different notions of structure: decomposability and LDE structure. (LDE is short for Low-Degree Extension polynomial, a technical concept.) Decomposability roughly means that a single lookup into the table can be answered by performing a handful of lookups into much smaller tables. This is a stricter requirement than LDE structure, but Lasso is especially efficient when applied to decomposable tables.
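Decomposability can be made concrete with bitwise AND: the table of 32-bit AND has 2^64 entries and could never be materialized, yet one lookup into it is answered by four lookups into a single 8-bit-by-8-bit subtable of only 2^16 entries. A sketch:

```python
# Sketch of decomposability: answer a lookup into the (never materialized)
# 2^64-entry table of 32-bit AND via four lookups into one small subtable.

AND8 = {(a, b): a & b for a in range(256) for b in range(256)}  # 2^16 entries

def and32_via_lookups(x, y):
    """32-bit AND assembled from four 8-bit subtable lookups on the chunks."""
    out = 0
    for i in range(4):
        xa = (x >> (8 * i)) & 0xFF     # i-th byte of x
        yb = (y >> (8 * i)) & 0xFF     # i-th byte of y
        out |= AND8[(xa, yb)] << (8 * i)
    return out

assert and32_via_lookups(0xDEADBEEF, 0x0F0F0F0F) == 0xDEADBEEF & 0x0F0F0F0F
```

Operations like AND, OR, and XOR decompose chunk-by-chunk this way because each output byte depends only on the corresponding input bytes; Lasso handles more entangled instructions with slightly more elaborate decompositions.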

Jolt's main idea is to create, for each RISC-V instruction f, a lookup table containing the entire evaluation table of f. For almost every RISC-V instruction, the resulting lookup table is decomposable, and Lasso applies.
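The sketch below scales this idea down to an 8-bit "CPU" so the evaluation table actually fits in memory: executing an instruction becomes a single table lookup rather than per-step circuit logic. For real 64-bit RISC-V instructions the table is astronomically large and is never materialized; Lasso only ever touches the entries actually looked up.

```python
# Scaled-down sketch of Jolt's idea: precompute the full evaluation table of
# an instruction (here, 8-bit wrapping ADD), then run each VM step for that
# instruction as one lookup instead of arithmetic-circuit logic.

WIDTH = 8
ADD_TABLE = [(a + b) % (1 << WIDTH)
             for a in range(1 << WIDTH) for b in range(1 << WIDTH)]

def execute_add(a, b):
    """One VM step for ADD: index into the evaluation table, no arithmetic."""
    return ADD_TABLE[a * (1 << WIDTH) + b]

assert execute_add(200, 100) == (200 + 100) % 256   # wraps around
assert execute_add(3, 4) == 7
```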

Reexamining Conventional Wisdom in SNARK Design

Lasso and Jolt also overturn several pieces of conventional wisdom in SNARK design.

#1: SNARKs over large fields are wasteful. Everyone should use FRI, Ligero, Brakedown, or their variants, because these avoid the typically expensive elliptic-curve techniques used in large-field SNARKs.

The field size here roughly corresponds to the size of the numbers that appear in SNARK proofs. Since adding and multiplying very large numbers is costly, the idea that large-field SNARKs are wasteful suggests we should design protocols that never produce large numbers. MSM-based commitments (recall that an MSM is a multi-scalar multiplication) use elliptic curves, which are typically defined over large fields (of size around 2^256), and these commitments have a reputation for poor performance.

Whether it makes sense to use small fields (when they are an option) depends heavily on the application. Lasso and Jolt go further: they show that, even with MSM-based commitments, the prover's work can be nearly independent of the field size. This is because, for MSM-based commitments, the size of the committed values matters more than the size of the field in which those values reside.
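The sketch below illustrates why committed-value size, not field size, drives the cost: square-and-multiply exponentiation spends one group squaring per exponent bit plus one group multiplication per set bit, so committing a 32-bit value costs far fewer group operations than committing a random 256-bit one, even when the group itself lives over a 256-bit field. (Real MSM provers use Pippenger's algorithm, which amortizes further, but the dependence on scalar bit-length is the same.)

```python
# Sketch: group-operation count of square-and-multiply as a function of the
# exponent's bit-length -- a proxy for per-value commitment cost in an MSM.

def group_ops_for_exponent(e):
    """Squarings + multiplications square-and-multiply needs for exponent e."""
    squarings = max(e.bit_length() - 1, 0)          # one per bit after the top
    multiplies = bin(e).count("1") - (1 if e else 0)  # one per set bit, top free
    return squarings + multiplies

small = 2**32 - 1     # a "small" committed value
big = 2**256 - 189    # a random-looking 256-bit field element
assert group_ops_for_exponent(small) < group_ops_for_exponent(big) // 4
```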

Concretely, the cost of an MSM is governed by the bit-length of the scalars being committed, not by the size of the field the commitments live in: committing to a vector of small values requires far fewer group operations than committing to the same number of random 256-bit field elements.
Today's popular SNARKs make provers commit to many random field elements. For example, in the popular SNARK backend Plonk, the prover commits to roughly 7 random field elements per circuit gate (plus other non-random field elements). These random field elements can be large even if all the values arising in the computation being proved are small. In contrast, Lasso and Jolt require provers to commit only to smaller values (and the Lasso prover commits to fewer values than in prior lookup arguments). This greatly improves the performance of MSM-based schemes. At a minimum, Lasso and Jolt should dispel the notion that the community ought to abandon MSM-based commitments when prover performance matters.

#2: Simpler instruction sets lead to faster zkVMs. As long as the evaluation table of each instruction is decomposable, Jolt's (per-instruction) complexity depends only on the instruction's input size. Jolt shows that surprisingly complex instructions are decomposable, in particular all RISC-V instructions. This runs counter to the common belief that simpler virtual machines necessarily yield smaller circuits and faster provers (per step of the VM). That belief is the guiding motivation behind especially simple zkVMs such as the Cairo VM, which was designed specifically to be SNARK-friendly.

In fact, Jolt achieves lower prover commitment costs than previous SNARKs achieved even for simpler virtual machines. For example, for executions of the Cairo VM, the SNARK prover commits to 51 field elements per VM step. The SNARK currently deployed for Cairo works over a 251-bit field (63 bits is a hard lower bound on the field size the SNARK can work with). Jolt's prover commits to roughly 60 field elements per step of a RISC-V CPU (for the RISC-V variant defined over 64-bit data types). Accounting for the small size of the committed field elements, the cost of the Jolt prover is roughly that of committing to 6 random 256-bit field elements.
#3: Breaking large computations into small chunks costs nothing. On this view, current projects break any large circuit into small chunks, prove each chunk separately, and recursively aggregate the results via SNARKs. Deployments use this approach to work around performance bottlenecks in popular SNARKs. For example, FRI-based SNARKs need close to 100 GB of prover space, even for small circuits. They also require FFTs, a superlinear operation that can become a bottleneck if a SNARK is applied to a large circuit "all at once".

The reality is that some SNARKs, such as Lasso and Jolt, exhibit economies of scale (in contrast to the diseconomies of scale in currently deployed SNARKs). This means that the larger the statement being proved, the smaller the prover's overhead relative to the work of directly checking the witness (i.e., evaluating the circuit on the witness with no guarantee of correctness). At a technical level, the economies of scale come from two sources:

  • Pippenger's speedup for an MSM of size n: a factor-of-log(n) improvement over the naive algorithm. The larger n is, the bigger the improvement.

  • In lookup arguments such as Lasso, the prover pays a "one-time" cost that depends on the size of the lookup table but not on the number of values looked up. This one-time cost is amortized over all lookups into the table. Larger chunks mean more lookups, which means better amortization.

The popular approach to handling large circuits today is to break things into the smallest possible pieces, the main constraint being that the pieces cannot be so small that recursively aggregating proofs becomes the prover's bottleneck. Lasso and Jolt suggest essentially the opposite approach: one should use SNARKs with economies of scale, break a large computation into the largest possible pieces, and recurse on the results.
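The amortization argument above can be put in numbers with a toy cost model (the units and constants here are hypothetical, chosen only to show the shape of the effect):

```python
# Toy cost model for lookup-argument economies of scale: a one-time cost
# proportional to the table size, amortized over the lookups in a chunk.
# All numbers are hypothetical units, not measurements.

TABLE_COST = 1_000_000   # one-time cost ~ table size
PER_LOOKUP = 10          # marginal cost per lookup

def per_lookup_cost(num_lookups):
    """Average cost per lookup for a chunk performing num_lookups lookups."""
    return PER_LOOKUP + TABLE_COST / num_lookups

small_chunk = per_lookup_cost(10_000)     # 10 + 100 = 110 per lookup
large_chunk = per_lookup_cost(1_000_000)  # 10 + 1   = 11 per lookup
assert large_chunk < small_chunk / 5      # bigger chunks amortize better
```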
The main constraint on chunk size is prover space, which grows as chunks get larger.

#4: High-degree constraints are necessary for efficient SNARKs. Jolt uses R1CS as its intermediate representation; using "high-degree" or "custom" constraints would add nothing in Jolt. The prover's cost in Jolt is dominated by the Lasso lookup argument, not by proving satisfiability of the constraint system that takes the lookups for granted. And since Jolt is general-purpose, it loses no generality. This refutes the intense focus among developers on designing custom, high-degree (typically hand-specified) constraint systems in an effort to squeeze better performance out of today's popular SNARKs.

Of course, some contexts may still benefit from high-degree or custom constraints. An important example is the MinRoot VDF, whose degree-5 constraints can reduce commitment costs by roughly a factor of 3.

#5: Sparse polynomial commitment schemes are expensive and should be avoided.

This has been a prominent criticism of the recently introduced constraint system called CCS and the SNARKs that support it, Spartan and Marlin; CCS is a clean generalization of all the constraint systems popular in practice today.

This criticism is unfounded. In fact, the technical core of Lasso and Jolt is the sparse polynomial commitment scheme in Spartan, known as Spark. Spark itself is a generic transformation of any (non-sparse) polynomial commitment scheme into one that supports sparse polynomials.

Lasso optimizes and extends Spark to ensure that the prover commits only to "small" values, but even without these optimizations the criticism is off base. In fact, Spartan's prover commits to fewer (random) field elements than SNARKs such as Plonk that avoid sparse polynomial commitments.

Spartan's approach has further performance advantages, especially for circuits with repeated structure. For these circuits, addition gates add nothing to the Spartan prover's work, which grows only with the number of variables in the given constraint system, not with the number of constraints. And unlike in Plonk, the Spartan prover does not need to commit to multiple different "copies" of the same variable.

We are optimistic that Lasso and Jolt will greatly change how SNARKs are designed, improving both performance and auditability; minimizing the prover's commitment costs is the key tool for doing so.

This article is translated from https://a16zcrypto.com/posts/article/introducing-lasso-and-jolt/. If reprinted, please indicate the source.

