/bsp/ - bsp

Someone's Office


Cryptography Bibliography bsp Board owner 07/04/2025 (Fri) 01:10:12 Id: 2c5f87 No. 6
This thread is for collecting cryptographic techniques, protocols and implementations.
https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.html Proofs, Arguments, and Zero-Knowledge is a textbook about zero-knowledge proofs. It ends with a summary of what state-of-the-art ZKP looks like in practice. It cleanly separates many aspects of implementation that tend to be obscured by the "marketing copy" that fills blockchain-oriented discussions of ZKP, and it discusses the technical aspects of implementation and use in depth. The last two chapters are especially useful from a ZKP end user's perspective. The second-to-last chapter, "SNARK Composition and Recursion", details how to combine ZKPs for recursive and non-recursive use. The last chapter, "Bird's Eye View of Practical Arguments", explains the practical trade-offs among the different technical choices.
https://a16zcrypto.com/posts/article/secure-efficient-zkvms-progress/
Article conclusion:
>zkVMs hold immense promise for making zero-knowledge proofs truly universal, but they remain in their infancy — rife with security challenges and debilitating performance overheads. Hype and marketing spin make it difficult to measure genuine progress. By articulating clear security and performance milestones, I hope to provide a roadmap that cuts through the noise. We’ll get there, but it will take time and sustained effort in both research and engineering.
A summary of the status of zkVM research, written by a zkVM researcher frustrated by overzealous marketing in his field. Notable highlights of his opinions:
>zkVMs are highly complex software projects still riddled with bugs
>still years away from meeting even the most basic goals for a secure and performant zkVM
>protocol formal verification is still not done in practice and will take years
>verifier implementation formal verification is likely to take some more years to complete
>Fiat-Shamir needs closer examination for practical security
>post-quantum appears to be a red herring (note: the textbook in >>7 by the same author says in section 19.2 that "quantum security depends only on the polynomial commitment scheme")
>100 bits of classical security is the "absolute bare minimum", 128 bits should be the ultimate expectation, and anything less indicates how poor the situation is
>"popular narratives" ignore important factors through misleading relative descriptions and unreported baselines, ultimately hiding ~100,000x overhead
>100,000x overhead and proofs on the order of 2 GB projected for this year
>hopes for proof sizes of about 256 KB as the maximum standard, and is optimistic about a maximum 16 ms verification time (described as "intentionally generous") projected over the next two years
>performance results should be viewed as contingent until formal verification is complete
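Since the Fiat-Shamir transform is called out above as needing closer examination, here is a minimal sketch of what it actually does: the verifier's random challenge in an interactive protocol is replaced by a hash of the transcript. This is a toy Schnorr proof of knowledge of a discrete log with a deliberately small 64-bit group; real deployments need far larger groups, and the generator/field choices here are illustrative assumptions, not a secure parameterization.

```python
# Toy Fiat-Shamir transform: non-interactive Schnorr proof of knowledge of x
# with y = G^x mod P. NOT secure: the 64-bit group is far too small; it only
# illustrates the transcript-hashing idea discussed above.
import hashlib
import secrets

P = 2**64 - 59          # largest 64-bit prime (toy size)
G = 2                   # toy generator choice
Q = P - 1               # exponents are reduced mod the group order

def challenge(*ints):
    """Fiat-Shamir: derive the 'verifier' challenge by hashing the transcript."""
    data = b"".join(i.to_bytes(32, "big") for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    """Prove knowledge of x for public y = G^x mod P, non-interactively."""
    y = pow(G, x, P)
    k = secrets.randbelow(Q)
    t = pow(G, k, P)          # prover's commitment
    c = challenge(G, y, t)    # interactive challenge replaced by a hash
    s = (k + c * x) % Q       # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = challenge(G, y, t)    # verifier recomputes the same challenge
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, pi = prove(123456789)
assert verify(y, pi)
```

The security questions raised in the article live exactly in `challenge()`: a hash that fails to bind the whole transcript admits practical attacks even when the interactive protocol is sound.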
Edited last time by bsp on 07/04/2025 (Fri) 05:52:23.
>>8 It should be pointed out that zkVMs emulating CPUs and circuit-based ZKP systems imply very different ZKP machinery. Notably, most CPUs are "stored-program" machines while circuit-based systems are "static-program" machines. An obvious corollary is that any ZKP system supporting the implementation of a zkCPU is generally recursive, and can therefore execute a description of itself. Two more corollaries seem obvious to me:
>the zkCPU system running the original ZKP system is one such self-description of that original ZKP system
>the performance of an optimal self-description of a ZKP system is at least as good as that of the zkCPU self-description, no matter the performance metric used
There's an obvious question: what performance spectrum can be expected from "zkFPGA" systems based directly on using field operations for circuit descriptions and the rest of the necessary ZKP machinery? zkFPGA seems like it should be a reasonable baseline mechanism. If this can be assumed and zkFPGA self-emulation is poor, then one could expect zkCPU self-emulation to be similarly poor. Since zkCPU self-emulation consists of the composition of two virtual machines, a ZKP on top of a CPU and a CPU on top of a ZKP, one would intuitively expect comparable results from the reverse composition, a CPU-on-ZKP-on-CPU system. If the intuition holds, poor performance for a "perfect" self-emulator should translate to poor performance for CPU-on-ZKP or for ZKP-on-CPU. If a practical ZKP has poor self-emulation, one would expect decent ZKP-on-CPU performance and therefore lousy CPU-on-ZKP performance. So then why bother with a zkCPU? The only way I can see zkCPU being sensible is if it has an acceptably small factor overhead compared to zkFPGA. I see two questions worth answering:
>What is the performance of the zkFPGA approach?
>Is there a reason to believe that zkCPU won't carry a significant penalty compared to zkFPGA?
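The composition argument above is multiplicative: stacking emulation layers multiplies their overhead factors. A trivial model makes the magnitudes concrete. The 100,000x figure is the one quoted from the a16z article; the 10x factor for a prover running on an emulated CPU is a hypothetical placeholder, since real overheads are workload-dependent.

```python
# Toy model of stacked virtual-machine overhead: each layer contributes a
# multiplicative slowdown factor relative to the layer beneath it.
def stack_overhead(*layer_factors):
    """Multiplicative overhead of composing emulation layers."""
    total = 1
    for f in layer_factors:
        total *= f
    return total

# Hypothetical composition: 100,000x for CPU-on-ZKP (the article's figure)
# times an assumed 10x for a ZKP prover running on that emulated CPU.
print(stack_overhead(100_000, 10))
```

Under this model, a self-emulation stack is only tolerable if every layer's factor is small, which is exactly why the zkCPU-vs-zkFPGA baseline question matters.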
Two papers on lookup arguments and VMs built using them: Lasso and Jolt. I'll need to read these properly later. Particularly notable is the table on page 12 giving the costs of lookup arguments. I have to wonder what leads to the square root verification time.
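For a sense of what a lookup argument actually checks, here is a toy version of the logarithmic-derivative identity used in LogUp-style lookup arguments (a relative of the Lasso machinery): all witness values lie in the table iff the sum of 1/(X - w_i) over the witness equals the multiplicity-weighted sum of 1/(X - t_j) over the table, and a verifier can test this at one random field point by Schwartz-Zippel. This sketch checks the identity directly with the witness in the clear; a real argument commits to the witness and multiplicities, and uses a much larger field.

```python
# Toy lookup check via the logarithmic-derivative identity:
#   sum_i 1/(x - w_i)  ==  sum_j m_j/(x - t_j)
# holds at a random x (with overwhelming probability) iff every witness
# value appears in the table, where m_j counts occurrences of table entry j.
import secrets
from collections import Counter

P = 2**61 - 1  # Mersenne prime field (toy size)

def inv(a):
    """Modular inverse by Fermat's little theorem."""
    return pow(a, P - 2, P)

def lookup_check(witness, table):
    m = Counter(witness)                        # multiplicity per table entry
    x = secrets.randbelow(P)                    # random challenge point
    # (negligible chance x collides with a witness or table value)
    lhs = sum(inv((x - w) % P) for w in witness) % P
    rhs = sum(m[t] * inv((x - t) % P) for t in table) % P
    return lhs == rhs

assert lookup_check([3, 5, 3], [1, 3, 5, 7])      # all values in table
assert not lookup_check([2, 9], [1, 3, 5, 7])     # 2 and 9 are not in table
```

The appeal for zkVMs is visible even in the toy: the prover's cost scales with the number of lookups performed, not with the cost of a circuit recomputing each looked-up function from scratch.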
>>8 >>9 I'm clearer now on why exactly zkVMs have been such a popular topic. The first issue is actually zero-knowledge itself. The zero-knowledge aspect of zk-SNARKs tends to be an expense in itself. However, doing a zero-knowledge proof directly inside another zero-knowledge proof is pointless: a non-ZK proof would be equally obscured at lower cost. An "optimal universal circuit" for a zk-SNARK would likely just be a ZKP wrapper checking a non-ZK SNARK. From that perspective, the real question isn't about efficient ZKP but about efficient proofs in general.
The second issue is special techniques such as "folding" and "lookup arguments" for transforming proofs in particular forms. If folding is effective but specifically applicable to zkCPU-styled designs, then the zkCPU does make sense. I'm not especially clear on the details of such techniques. This article discusses them in more detail:
https://a16zcrypto.com/posts/article/jolt-6x-speedup/
>Here’s another way to frame Jolt’s performance: The Jolt prover now performs under 800 field multiplications per RISC-V cycle. In contrast, the Plonk prover applied to an arithmetic circuit commits to ~8 random values per gate, translating to at least 800 field multiplications per gate. This makes the Jolt prover faster per RISC-V cycle than the Plonk prover per gate. For most applications, this means the Jolt prover is much faster than popular SNARKs applied to hand-optimized circuits.
>Why is this true? Because SNARKs like Plonk and Groth16 fail to exploit repeated structure at multiple levels. First, their prover costs don’t improve when applied to circuits with internal structure — they’re just as slow on structured circuits as on arbitrary ones. Second, encoding computations as circuits is already a mistake in many contexts. Batch-evaluation arguments like Shout do the opposite: They exploit repeated evaluation of the same function to dramatically reduce prover cost.
>And batch-evaluation is exactly what VMs do: They execute the same small set of primitive instructions over and over, one per cycle. So while a VM abstraction introduces some overhead compared to hand-optimized circuits, the structure it imposes enables optimizations that more than make up for it.
For now, I don't think it's worth pursuing this particular approach. I think there will be better opportunities to exploit the same techniques later without resorting to the heavy zkVM approach, and Google's work on the topic both benchmarks well and is likely to be the subject of ongoing scrutiny and auditing in a way that Jolt can't be while it's still a moving target. Auditability is a priority for my work, and some of the designs I have in mind are intended to take full advantage of such mechanisms once they are mature. In addition, there are "in-house" elements that work better with Google's work than with Jolt, as well as other issues with Jolt and its dependencies. I've considered the issue of using a zkVM, and my conclusion is that it would ultimately end up being a distraction in practice.
I would also like to point out that Jolt's zkVM is either not zero-knowledge or not universal in the strictest sense: there's a leak of information via proof size unless the length of execution is part of the public input. It's not clear how feasible it is to resolve this issue, nor whether it's fatal in practice. It does forbid some applications, however.
Long term, I think the idea of non-ZK SNARKs could be very useful. The ability to provide succinct proofs allows communicating the truth of something at low cost. This may allow for verification in new contexts and may help ensure, at low expense, that actors in non-ZK contexts are not malicious.
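To make the proof-size leak concrete: if proof size grows with the number of executed cycles, an observer learns the secret running time even without breaking the proof system. The standard mitigation is to pad every execution trace to a fixed, publicly declared bound before proving, trading prover time for a uniform proof size. The `NOP` sentinel and tuple trace representation below are hypothetical, purely to illustrate the idea.

```python
# Sketch of hiding execution length by padding a trace to a public bound.
# The trace format (opcode, operand, operand) and NOP value are made up.
NOP = ("nop", 0, 0)

def pad_trace(trace, public_bound):
    """Pad an execution trace with no-ops up to a publicly known length,
    so anything derived from trace length (e.g. proof size) is uniform."""
    if len(trace) > public_bound:
        raise ValueError("execution exceeded the declared bound")
    return trace + [NOP] * (public_bound - len(trace))

short = pad_trace([("add", 1, 2)], 8)
long_ = pad_trace([("add", 1, 2)] * 5, 8)
assert len(short) == len(long_) == 8   # length now depends only on the bound
```

The cost is that the prover always pays for the worst case, which is exactly the kind of overhead the zkVM performance discussion above is worried about.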
Being able to prove, for example, that running a particular program yields a particular file could allow a BitTorrent-like protocol to verifiably update its compression and decompression schemes, and could allow for heterogeneous networked storage. This could reduce or eliminate various hidden costs associated with file storage.
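The statement such a proof would certify can be written down directly: "running this decompressor on blob c yields data whose hash is h". The sketch below checks it by re-executing, which is what a succinct proof would let a peer avoid; zlib stands in for whatever codec the hypothetical protocol agrees on.

```python
# The relation a succinct proof would certify in the scenario above:
#   sha256(decompress(c)) == h
# Here we check it by re-running the decompression; a SNARK would let a
# verifier accept the claim without doing this work or holding the data.
import hashlib
import zlib

def statement_holds(compressed, expected_hash_hex):
    data = zlib.decompress(compressed)
    return hashlib.sha256(data).hexdigest() == expected_hash_hex

original = b"example file contents" * 100
c = zlib.compress(original)
h = hashlib.sha256(original).hexdigest()
assert statement_holds(c, h)
```

With a proof attached to (c, h), a storage node could serve the compressed blob and the recipient could trust the advertised content hash without decompressing first.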

