Developers refute Vitalik: The premise is wrong, RISC-V is not the best choice

Azuma
4 hours ago
Scalability and maintainability cannot be achieved at the same time.

This article comes from: Ethereum developer levochka.eth

Compiled by Odaily Planet Daily ( @OdailyChina ); Translated by Azuma ( @azuma_eth )

Editor’s Note:

Yesterday, Ethereum co-founder Vitalik published a radical article about Ethereum's execution-layer upgrade (see "Vitalik's radical new article: Execution layer expansion requires 'breakthrough', EVM must be iterated in the future"). The article proposed replacing the EVM with RISC-V as the virtual machine language for smart contracts.

As soon as the article came out, it caused an uproar in the Ethereum developer community, and many technical leaders voiced objections to the plan. Shortly after the article was published, levochka.eth, a front-line Ethereum developer, posted a long rebuttal beneath the original article, arguing that Vitalik made incorrect assumptions about the proof system and its performance, that RISC-V cannot deliver scalability and maintainability at the same time, and that it may therefore not be the best choice.

The following is the original content of levochka.eth, translated by Odaily Planet Daily.


Please don't do this.

This plan is not sound because you make incorrect assumptions about the proof system and its performance.

Verifying the assumptions

As far as I understand, the main arguments for this approach are scalability and maintainability.

First, I want to discuss maintainability.

In fact, all RISC-V zkVMs require precompiles to handle computationally intensive operations. The list of precompiles for SP1 can be found in Succinct's documentation, and you will find that it covers almost all of the important computational opcodes in the EVM.

Therefore, every modification to a base-layer cryptographic primitive requires new "circuits" to be written and audited for these precompiles, which is a severe limitation.
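To make that maintenance cost concrete, here is a hypothetical sketch (my own illustration, not SP1's actual API) of the dispatch decision inside a zkVM: operations on the precompile list are proven by a dedicated, hand-audited circuit, while everything else is proven one instruction at a time. Every entry on that list is a circuit someone must rewrite and re-audit whenever the underlying primitive changes.

```python
import hashlib

# Hypothetical sketch of zkVM precompile dispatch (not any real zkVM's API).
# Operations in PRECOMPILES are handled by dedicated, hand-audited circuits;
# everything else is proven one RISC-V instruction at a time.

PRECOMPILES = {
    # sha3_256 used as a stand-in; real zkVMs precompile Keccak-256 itself.
    "keccak256": lambda data: hashlib.sha3_256(data).digest(),
    "sha256": lambda data: hashlib.sha256(data).digest(),
}

def execute_op(op: str, data: bytes) -> tuple[bytes, str]:
    """Return (result, proof_path) for one operation."""
    if op in PRECOMPILES:
        # Fast path: one circuit invocation, but the circuit itself must be
        # rewritten and re-audited whenever the primitive changes.
        return PRECOMPILES[op](data), "dedicated-circuit"
    # Slow path: interpreted as ordinary instructions, no extra audit burden.
    return data, "per-instruction-trace"

result, path = execute_op("sha256", b"hello")
print(path)  # dedicated-circuit
```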

Admittedly, if performance is good enough, it might be relatively easy to maintain the non-EVM part of the client implementation. I'm not sure performance will be good enough, and I'm less confident about this part for the following reasons:

  • State-tree computation can indeed be greatly accelerated by proof-friendly precompiles (such as Poseidon).

  • But it is not clear whether deserialization can be handled in an elegant and maintainable way.

  • In addition, there are tricky details (such as gas metering and various checks) that belong to block evaluation but sit in the non-EVM part of the implementation, and those parts tend to come under the most maintenance pressure.

Next, the part about scalability.

I need to reiterate: there is no way RISC-V can handle EVM workloads without precompiles. Absolutely not.

So the statement in the original article that "final proof time will be dominated by current precompile operations", while technically correct, is overly optimistic: it assumes there will be no precompiles in the future, when in fact (in that future scenario) precompiles will still exist and will correspond exactly to the computationally intensive opcodes in the EVM (such as signing, hashing, and possibly large-number modular arithmetic).

Regarding the Fibonacci example: it is hard to judge without going into very low-level details, but its advantage comes at least in part from:

  1. The difference between interpretation and direct-execution overhead;

  2. Loop unrolling (which reduces control flow for RISC-V; I am not sure whether Solidity can do this, and even a single opcode still generates a lot of control flow and memory accesses due to interpretation overhead);

  3. The use of smaller data types.

Here I would like to point out that to achieve the advantages of points 1 and 2, the interpretation overhead must be eliminated. That is consistent with the philosophy of RISC-V, but then it is no longer the RISC-V we are currently discussing: it is a similar (?) architecture that needs additional capabilities, such as support for the concept of contracts.
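The interpretation overhead in point 1 can be made visible with a toy model (my own illustration, not a benchmark of any real VM): an interpreter pays a dispatch branch for every opcode it executes, while directly executed compiled code pays only the control flow of the computation itself.

```python
# Toy model of interpretation overhead (illustrative only).
# Each interpreted opcode costs one dispatch branch on top of the operation;
# directly executed code costs only the operation itself.

def fib_interpreted(n: int) -> tuple[int, int]:
    """Run Fibonacci on a tiny opcode loop; return (result, dispatch_count)."""
    program = ["nop", "add_swap"]        # loop body as two toy opcodes
    a, b, dispatches = 0, 1, 0
    for _ in range(n):
        for op in program:
            dispatches += 1              # every opcode pays a dispatch branch
            if op == "nop":
                pass                     # placeholder opcode, does nothing
            elif op == "add_swap":
                a, b = b, a + b
    return a, dispatches

def fib_direct(n: int) -> tuple[int, int]:
    """Same computation as straight-line compiled code: zero dispatches."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a, 0

assert fib_interpreted(10)[0] == fib_direct(10)[0] == 55
print(fib_interpreted(10)[1])  # 20 dispatch branches for 10 iterations
```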

Here comes the problem

So, there are some problems here.

  • To improve maintainability, you need a RISC-V (with precompiles) to which the EVM can be compiled - this is basically the current situation.

  • Improving scalability requires something completely different - a new architecture, perhaps like RISC-V, that understands the concept of contracts, is compatible with the limitations of the Ethereum runtime, and can execute contract code directly (without interpretation overhead).

I'm going to assume for now that you mean the second case (since the rest of the post seems to imply it). I should caution that all code outside this environment would still be written in the current RISC-V zkVM language, which has significant maintenance implications.

Other possibilities

We could compile bytecode from high-level EVM opcodes, with the compiler responsible for ensuring that the generated program maintains invariants such as never overflowing the stack. I would like to see this demonstrated on the vanilla EVM first. A SNARK proof of correct compilation could be provided along with the contract deployment.
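A minimal sketch of the kind of invariant such a compiler would enforce (the opcode stack effects below are the standard EVM ones; the checker itself is my illustration, and real validation would also cover control flow): walk a straight-line block tracking stack height, and reject any program that could underflow or exceed the EVM's 1024-item stack limit.

```python
# Sketch of a compile-time stack-invariant check for straight-line
# EVM-style bytecode (illustrative; real validation also handles jumps).

STACK_EFFECT = {          # (items popped, items pushed) per opcode
    "PUSH1":  (0, 1),
    "DUP1":   (1, 2),
    "ADD":    (2, 1),
    "POP":    (1, 0),
    "MSTORE": (2, 0),
}
MAX_STACK = 1024  # EVM stack limit

def check_block(ops: list[str], height: int = 0) -> int:
    """Return the stack height after the block, or raise if an invariant breaks."""
    for op in ops:
        popped, pushed = STACK_EFFECT[op]
        if height < popped:
            raise ValueError(f"stack underflow at {op}")
        height += pushed - popped
        if height > MAX_STACK:
            raise ValueError(f"stack overflow at {op}")
    return height

# A well-formed block: push two values, add them, store the result at a
# pushed offset. Final height 0 means the block is stack-balanced.
print(check_block(["PUSH1", "PUSH1", "ADD", "PUSH1", "MSTORE"]))  # 0
```

Because the check runs once at compile (or deploy) time, the proof system never has to re-establish these invariants during execution.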

We could also construct a formal proof that certain invariants hold. As far as I know, this approach (rather than virtualization) has been used in some browser contexts. By generating a SNARK of such a formal proof, you can achieve a similar result.

Of course, the easiest option is to just go for it...

Build a minimal on-chain MMU

You may have implied this in your post, but let me spell it out: if you want to eliminate virtualization overhead, you have to execute compiled code directly, which means you must somehow prevent a contract (now an executable program) from writing to kernel (non-EVM implementation) memory.

Therefore, we need some kind of memory management unit (MMU). The paging mechanism of traditional computers is probably unnecessary, because the memory space is nearly infinite. This MMU should be as lean as possible (because it sits at the same level of abstraction as the architecture itself), but some functions (such as transaction atomicity) can be moved into this layer.
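A minimal sketch of the idea (my own simplification, not a concrete design): memory is split into a kernel region, which holds the non-EVM implementation, and contract regions; the MMU's only hard job is a bounds check rejecting contract writes into kernel memory, and transaction atomicity can live at this same layer as a snapshot/rollback.

```python
# Minimal MMU sketch: flat memory, one kernel region, no paging.
# The only enforcement is a bounds check on contract writes, plus a
# snapshot/rollback that gives transactions atomicity at this layer.

class MinimalMMU:
    def __init__(self, kernel_size: int, total_size: int):
        self.mem = bytearray(total_size)
        self.kernel_end = kernel_size      # [0, kernel_end) is kernel-only
        self._snapshot = None

    def write(self, addr: int, value: int, *, kernel: bool = False) -> None:
        if not kernel and addr < self.kernel_end:
            raise PermissionError(f"contract write into kernel memory at {addr}")
        self.mem[addr] = value

    def begin_tx(self) -> None:
        self._snapshot = bytes(self.mem)   # atomicity: snapshot on begin

    def rollback_tx(self) -> None:
        self.mem[:] = self._snapshot       # restore pre-transaction state

mmu = MinimalMMU(kernel_size=256, total_size=1024)
mmu.begin_tx()
mmu.write(512, 42)          # contract write into its own region: allowed
try:
    mmu.write(10, 99)       # contract write into kernel memory: rejected
except PermissionError:
    mmu.rollback_tx()       # a failed transaction leaves no trace
print(mmu.mem[512])  # 0 after rollback
```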

At this point, the provable EVM will become the kernel program running on this architecture.

RISC-V may not be the best choice

Interestingly, under all these conditions, the best instruction set architecture (ISA) for this task may not be RISC-V, but something like EOF-EVM , for the following reasons:

  • “Small” opcodes actually result in a large number of memory accesses, which are difficult to handle efficiently with existing proof methods.

  • To reduce branching overhead, our recent paper Morgana shows how to prove code with "static jumps" (similar to EOF) at precompile-level performance.

My suggestion is to build a new proof-friendly architecture with a minimal MMU that allows contracts to run as separate executables. I don't think it should be RISC-V; rather, a new ISA optimized for the constraints of SNARK protocols, perhaps even one that partially inherits a subset of the EVM opcodes, might be better. As we know, precompiles will always exist whether we like it or not, so RISC-V brings no simplification here.

This article is translated from the original post at https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617/5. If reprinted, please indicate the source.

