Compiled and edited by KarenZ, Foresight News
On February 25, the Ethereum Foundation Research Team held its 13th AMA on Reddit. Foresight News read through more than 300 comments and compiled and summarized the main views of Vitalik Buterin and members of the Ethereum Foundation Research Team. The discussion mainly covered L1 revenue and value accumulation, L2, blob fees, the L1 gas limit target, the risk of large companies taking over Ethereum, the progress of the Pectra upgrade, and other future plans.
Fees
Question: The blob fee model seems a bit unsatisfactory and somewhat oversimplified, as it sets the minimum fee to the smallest unit that exists in the protocol (1 wei). Given how the EIP-1559 price mechanism works, we may see a long period of no blob fees while we scale blobs significantly. This seems less than ideal: we should incentivize the use of blobs, but not make them free to use on the network. Given this, are there plans to restructure the blob fee model? If so, in what ways? What alternative fee mechanisms or adjustments are being considered?
Vitalik Buterin: I do think we should keep the protocol simple, avoid over-adapting to short-term circumstances, and align the fee-market logic for execution gas and blob gas. Ethereum Improvement Proposal 7706 (EIP-7706) has this as one of its two main focuses (the other is adding a separate gas dimension for calldata).
Ansgar Dietrichs: A possible solution was proposed by Max Resnick in EIP-7762. It proposes setting the minimum fee at a level low enough to be effectively zero cost during periods of low network congestion, but high enough that fees can rise more quickly when demand increases. This proposal came relatively late in the development cycle of the Pectra hard fork, and implementing it could have risked delaying the hard fork. We brought the matter to RollCall #9 to assess whether the issue was serious enough to justify a possible delay, which can be seen at: https://github.com/ethereum/pm/issues/1172. The feedback we received indicates that the L2 side no longer considers this a pressing issue. Based on that feedback, we decided to keep the current model in the Pectra hard fork. However, if there is enough demand in the ecosystem, this may still be a viable feature option for future hard forks.
Dankrad Feist: Concerns about blob fees being too low are greatly exaggerated and short-sighted. However, in the short term, I do think that setting a higher minimum price for blobs would be a better option.
Justin Drake: Yes, EIP-7762 would increase MIN_BASE_FEE_PER_BLOB_GAS from 1 WEI to something higher, like 2**25 WEI.
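For context on the mechanism being debated: the blob fee works like EIP-1559, but over a separate "blob gas" dimension. Each block's blob base fee is derived from the accumulated excess blob gas, with MIN_BASE_FEE_PER_BLOB_GAS as the floor that EIP-7762 would raise. Below is a simplified sketch following the pseudocode published in EIP-4844 (the constants shown are the pre-Pectra values):

```python
# Simplified sketch of the EIP-4844 blob fee mechanism (pre-Pectra constants).
GAS_PER_BLOB = 2**17                      # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BASE_FEE_PER_BLOB_GAS = 1             # the 1-wei floor discussed above
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477   # controls how fast the fee reacts

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas accumulates whenever usage exceeds the per-block target."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    """EIP-7762 would raise the first argument (the floor) to e.g. 2**25 wei."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

With a 1 wei floor, the fee stays negligible until sustained over-target usage accumulates enough excess blob gas; a higher floor shortens that ramp-up, which is exactly the trade-off discussed above.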
Question: What are the Ethereum Foundation’s plans for improving scalability and reducing mainnet transaction fees in the coming years?
Vitalik Buterin:
Scale L2s: more blobs (e.g. PeerDAS in Fusaka).
Continue to improve interoperability and cross-L2 user experience (see, for example, the recent Open Intents framework).
Moderately increase the L1 gas limit (see Vitalik's recent blog post for the basic reasoning).
Ethereum’s value accumulation and price issues
Question: L2 scaling has caused L1 to lose significant value accumulation, which has also affected ETH. Beyond the expectation that L2s will eventually burn more ETH and conduct more transactions, what plans do you have to address this problem?
Justin Drake: Blockchains (whether L1 or L2) typically have several sources of revenue. The first is congestion fees, or "base fees." The second is contention fees, or MEV (maximal extractable value).
Let's talk about contention fees first. In my opinion, as modern applications and wallet designs evolve, MEV will increasingly flow upstream and be recaptured by applications, wallets, and/or users. Eventually, almost all MEV will be recaptured by entities closer to the originator of the traffic, and downstream infrastructure like L1 and L2 will only capture a tiny share of contention fees. In other words, in the long run, it may be futile for L1 and L2 to chase MEV.
What about congestion fees? For Ethereum L1, the bottleneck has historically been EVM execution. Consensus-participant considerations, such as disk I/O and state growth, are the key drivers for setting relatively small execution gas limits. With modern blockchain designs that use SNARKs or fraud-proof games for scaling, we will increasingly live in a world where execution is no longer the scarce resource. The bottleneck then shifts to data availability (DA), which is inherently scarce because Ethereum validators run on limited home internet connections, and in practice DAS only provides a linear ~100x scalability improvement, unlike fraud proofs and SNARKs, which provide nearly unlimited scalability improvements.
So let's dive into DA economics, which I believe is the only sustainable source of revenue for L1. EIP-4844, which significantly increased DA supply via blobs, went into effect less than a year ago. The "Average Number of Blobs per Block" chart on the dashboard clearly shows the growth in blob demand over time (which I believe is driven primarily by induced demand), with demand gradually growing from 1 blob per block, to 2, to 3. We are now saturating blob supply, but are only in the early stages of blob price discovery, and low-value junk transactions are gradually being squeezed out by more economically dense transactions.
If DA supply remained constant for a few months, I would expect hundreds of ETH to be burned per day for DA. However, currently Ethereum L1 is in growth mode and the Pectra hard fork (to be launched in a few months) will increase the target number of blobs per block from 3 to 6. This surge in DA supply should depress the blob fee market and it will take a few months for demand to catch up again. As full danksharding is rolled out in the next few years, there will be a cat and mouse game between DA supply and demand.
What will the long-term equilibrium look like? My thesis has not changed since my 2022 Devcon talk Ultra-Sound Money. In the long run, I expect DA demand to exceed supply. In fact, supply is fundamentally limited by consensus participants running on home internet connections, and I think the DA throughput equivalent to about 100 home internet connections is insufficient to meet global demand, especially as humans always find creative ways to consume more bandwidth. In about 10 years, I expect Ethereum to reach 10 million TPS (about 100 transactions per person per day), which is $1 billion in revenue per day even if each transaction is as low as $0.001.
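As a quick back-of-envelope check of those long-term figures (all inputs are the illustrative assumptions quoted above, not protocol parameters):

```python
# Back-of-envelope check of the long-term projection quoted above.
tps = 10_000_000                 # assumed ~10M transactions per second
fee_per_tx_usd = 0.001           # assumed average fee per transaction
world_population = 8_000_000_000 # rough figure
seconds_per_day = 86_400

tx_per_day = tps * seconds_per_day
print(f"{tx_per_day / world_population:.0f} tx per person per day")   # ~108
print(f"${tx_per_day * fee_per_tx_usd / 1e9:.2f}B revenue per day")   # ~$0.86B
```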
Of course, DA income is only part of ETH’s long-term value accumulation. Two other important considerations are issuance and currency premium.
Dankrad Feist: All blockchains have a value accumulation problem, and there is no perfect solution. Execution layers fare slightly better than data layers because they can extract priority fees that reflect the urgency of transactions, while the data layer only charges a fixed fee. My answer to value accumulation is to create value first. While creating value, we should maximize those opportunities that may be charged in the future. This means maximizing the value of the Ethereum data layer to increase the value of Ethereum as a whole, thus eliminating the need for alternative data availability (alt DA); expanding L1 so that high-value applications can actually run on L1; and encouraging projects like EigenLayer to expand the use of Ethereum as (non-financial) collateral.
Question: If the price of Ethereum falls below a certain level, will the economic security of ETH be threatened?
Justin Drake: If we want Ethereum to be truly resistant to attacks (including attacks from nation-states), high economic security is essential. Currently, Ethereum has about $80 billion in economic security (slashable), which is the largest of all blockchains (33,644,183 ETH staked, current ETH price is $2,385). In comparison, Bitcoin's economic security is about $10 billion (non-slashable).
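For reference, the roughly $80 billion figure is simply the quoted stake multiplied by the quoted price:

```python
# Economic security quoted above: staked ETH times the spot price at the time.
staked_eth = 33_644_183
eth_price_usd = 2_385
print(f"${staked_eth * eth_price_usd / 1e9:.1f}B slashable")  # ~$80.2B
```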
Question: What is the ticker?
Justin Drake: At least for me, it's ETH. I also hold some BTC, mainly for sentimental reasons and as a collectible.
L2 Aspects
Question: Regarding L2 interoperability, many websites (e.g. Aave, Uniswap) and wallets (e.g. MetaMask, Trust Wallet) now have increasingly long drop-down menus to select different L2 networks, which is a poor user experience. When will we see these drop-down menus completely disappear?
Vitalik Buterin: I hope that chain-specific addresses will reduce the need for these kinds of drop-down menus in many scenarios. You can paste an address like eth:ink:0x12345...67890, and the application will immediately know that you want to interact with Ink and perform the corresponding operations on the backend. In many scenarios, this is a more application-specific problem, and best practices need to be found to make these complexities as invisible to users as possible. Another long-term possibility is better cross-L2 interoperability, allowing more DeFi applications to run on only one main L2.
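The exact syntax for chain-specific addresses is still being standardized, but the wallet-side handling Vitalik describes is simple. A minimal, hypothetical sketch assuming a namespace:chain:address layout like the example above:

```python
# Hypothetical parser for a chain-specific address such as
# "eth:ink:0x1234...7890". The format shown is illustrative; the actual
# standard is still under discussion in the ecosystem.

def parse_chain_specific_address(value: str) -> tuple[str, str, str]:
    namespace, chain, address = value.split(":", 2)
    if not (address.startswith("0x") and len(address) == 42):
        raise ValueError("expected a 20-byte hex address")
    return namespace, chain, address

# A wallet could use the chain label to pick the right network automatically,
# instead of asking the user to choose from a drop-down menu.
namespace, chain, address = parse_chain_specific_address("eth:ink:0x" + "ab" * 20)
print(namespace, chain, address)
```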
Question: Given the sentiment in the Ethereum community, do you still believe that focusing on L2 solutions is the winning option? If you could go back in time, would you do anything differently?
Ansgar Dietrichs: In the long run, Rollups remain the only principled way to scale blockchains to the scale needed to be the base layer of the global economy. Looking back, I don’t think we’ve invested enough effort in the path to that end goal and the intermediate user experience. Even in a Rollup-centric world, L1 still needs to scale dramatically (as Vitalik recently outlined). We should realize that continuing to advance the L1 scaling path in parallel with advancing L2 work will provide better value to users in the transition period.
My view is that Ethereum has not faced strong competition in a long time and has become a little complacent. The more intense competition we are seeing now has highlighted some misjudgments and forced us to provide an overall better product (not just a theoretically correct first-principles solution). But yes, to reiterate, some form of Rollup is critical to achieving the scaling endgame. The specific architecture is still evolving - for example, Justin's recent native Rollup exploration shows that the specific approach is still in flux - but the general direction is clearly correct.
Dankrad Feist: I disagree with this answer in some ways. If you define Rollups as just DA plus verified execution, how are they different from execution sharding? In practice, we think of Rollups more as white-label Ethereum. To be fair, this model has freed up a lot of energy and money, and if we had focused only on execution sharding in 2020, we wouldn't have made as much progress in zkEVM and interoperability research as we have now. Technically, we can implement anything we want now - a highly scalable L1, a more scalable sharded blockchain, or a Rollup base layer. In my opinion, the best option for Ethereum is a combination of the first and the third.
Future plans and discussions
Question: What types of applications do you expect to be designed for Ethereum on short-term (less than 1 year), 1-3 year, and 4+ year timelines?
Ansgar Dietrichs: This is a very broad question, so I will give a (very) partial answer, looking at the broader trends.
I firmly believe that we are currently at a critical inflection point in crypto history. We are moving out of a long sandbox phase where cryptocurrencies were primarily focused on the inside - building internal tools, creating infrastructure, developing building blocks such as DeFi, but with limited connection to the real world. All of these are very important and valuable, but have little impact on the real world.
The current moment is consistent with both the maturity of the technology (there is still some work to do, but we have a rough grasp of how to build infrastructure to support billions of users) and the positive shift in the regulatory environment in the largest market (the United States). Taken together, I believe it is time for Ethereum and cryptocurrencies as a whole to move out of the sandbox stage.
This shift will require a fundamental shift across the ecosystem. The best articulation of this challenge I’ve come across is the “Real World Ethereum” vision by DC Posch: https://daimo.com/blog/real-world-ethereum. The core theme is to focus on building real products for people in the real world, using cryptocurrency as a facilitator rather than a selling point in itself. Importantly, all of this still preserves our core crypto values.
Currently, the main types of real-world products are stablecoins (which got started earlier due to fewer regulatory restrictions), with some smaller real-world impact success stories like Polymarket. In the short term, I expect stablecoins to leverage this first-mover advantage and grow further in size and importance.
In the medium term, I expect real-world activities to become more diverse: other real-world assets (such as stocks, bonds, and anything that can be represented on a blockchain). In addition to assets, I predict we will also see many new types of activities and products (e.g. mapping business processes on-chain, governance, further new mechanisms such as prediction markets).
All of this will take time, but the effort invested here will pay off in the long term. Focusing too much on continuing “sandbox” activities (e.g., Meme coins) may show more traction in the short term, but could risk being left behind as real-world Ethereum takes off.
Carl Beekhuizen: In general, we focus on scaling the entire technology stack rather than designing for specific applications. The overall theme is scaling: how do we build the most powerful platform while remaining decentralized and censorship-resistant.
In the short term (<1 year), the main focus is on launching PeerDAS, which will allow us to significantly increase the number of blobs per block. We are also improving the EVM: hopefully we can get EOF out soon. A lot of research is going into statelessness, EOF, gas repricing, ZK-ing the EVM, etc.
Over the next 1-3 years we will be scaling blob throughput further and launching some of the research projects listed above, including further development of zkEVM (zero-knowledge proof EVM) initiatives such as ethproofs.org.
Looking ahead 4 years and beyond, our vision is to add a set of extensions to the EVM (which L2s will also adopt and benefit from), blob throughput will increase dramatically, we will have censorship-resistance improvements (e.g. via FOCIL), and we will speed everything up further with ZK (zero-knowledge proofs).
Question: There is a view that the Ethereum mainnet should one day ossify, with innovation happening at the L2 level. But at the same time, we keep seeing new research (such as execution tickets, APS, one-time signatures, etc.), with the Ethereum Foundation also driving this research, which is great. The playing field is constantly changing, and in my experience digital products are never finished. In other words, how likely is it that we will still need to make adjustments after Vitalik's roadmap/beacon chain implementation?
Vitalik Buterin: Ideally, we can separate the parts that can be frozen from the parts that need to keep evolving. We have already done this to some extent, with the separation of execution/consensus (with the consensus side being pushed more aggressively, including Justin Drake's recent idea of a wholesale beacon chain upgrade). I expect these specifications to continue to evolve. In addition, I think there is light at the end of the tunnel for many technical problems, as the pace of research has indeed slowed down compared to about 5 years ago, and the focus has been more on incremental improvements recently.
Question: Vitalik commented in a recent article on the Verge: "We will also soon face a decision point as to which of the following three options to choose: (i) Verkle trees, (ii) STARK-friendly hash functions, (iii) conservative hash functions." Has the decision been made on which path to take?
Vitalik Buterin: It's still under discussion. My personal impression is that the mood has been leaning slightly towards (ii) over the last few months, but nothing has been decided yet. I also think it's worth considering these options in the context of the overall roadmap they will be a part of. In particular, the most realistic options to me seem to be:
Option A:
2025: Pectra, and possibly EOF
2026: Verkle
2027: L1 execution optimization (e.g. delayed execution, multi-dimensional gas, repricing)
Option B:
2025: Pectra, and possibly EOF
2026: L1 execution optimization (e.g. delayed execution, multi-dimensional gas, repricing)
2027: Initial launch of Poseidon.
2028: More and more stateless clients over time.
Option B is also compatible with conservative hash functions; however, in this case I would still favor a gradual rollout, because even if the hash function is less risky than Poseidon, the proof system will still be riskier in the beginning.
Justin Drake: As Vitalik said, it is still under discussion. That being said, the long-term fundamentals clearly point to (ii). In fact, (i) does not have post-quantum security, and (iii) is inefficient.
Question: What are the recent developments in VDFs?
Dmitry Khovratovich: A 2024 paper revealed potential attacks on the candidate VDF MinRoot, showing that its computation can be accelerated on multi-core machines, which breaks its sequentiality. We currently lack VDF constructions that are both efficient (computable on modest hardware) and secure (resistant to such acceleration), and there is no solid candidate to replace MinRoot. VDF research and application have therefore been temporarily shelved.
Question: Is there a desire to scale Ethereum 100x in the next year? How open are you to simple parameter adjustments to the protocol, e.g. shortening block time by 3x, doubling the block gas limit, raising the gas target, increasing the number of blobs, etc.?
Francesco D'Amato: Scaling Ethereum as a whole 100x is not realistic, but I think scaling blob throughput 100x relative to before EIP-4844 is possible. EIP-4844 brought roughly a 3x increase, Pectra is expected to bring another 2x, and Fusaka is targeting 4-8x. So we still need another 2-4x, and I think we can definitely achieve this goal.
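Taking the stated factors at face value (the Fusaka range is a target, not a commitment), the arithmetic works out as follows:

```python
# Cumulative blob-throughput scaling relative to pre-EIP-4844 Ethereum.
eip_4844 = 3                    # ~3x from introducing blobs
pectra = 2                      # blob target 3 -> 6 per block
fusaka_low, fusaka_high = 4, 8  # targeted PeerDAS scaling range

achieved_low = eip_4844 * pectra * fusaka_low    # 24x
achieved_high = eip_4844 * pectra * fusaka_high  # 48x
print(f"still needed for 100x: {100/achieved_high:.1f}x to {100/achieved_low:.1f}x")
```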
Question: What features will the Fusaka and Glamsterdam upgrades include?
Barnabé Monnot: Fusaka seems to be mainly focused on PeerDAS, which is critical for scaling L2, and few people want to delay the delivery of Fusaka due to other features. I personally would like to see FOCIL in Glamsterdam, as well as Orbit, which will pave the way for us to move towards SSF (Single Slot Finality). The above is more focused on the Consensus Layer (CL) and Data Availability (DA), but in Glamsterdam, the Execution Layer (EL) should also work to drive L1 scaling, and there are many discussions going on about which feature sets are best suited.
Question: Can an EIP “force” L2s to adopt Stage 1 (or even Stage 2) decentralization, given their slow progress toward decentralization?
Vitalik Buterin: Native Rollups (e.g. via an EXECUTE precompile) do this to some extent. L2s are still free to ignore this feature and write their own code, or even add their own backdoors, but they will have access to a simple and highly secure proof system that is directly part of L1, so L2s seeking EVM compatibility are likely to choose this option.
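To make the idea concrete: an EXECUTE precompile would let an L2 bridge contract ask L1 to verify an EVM state transition directly, instead of shipping its own fraud-proof or SNARK verifier. A purely hypothetical sketch of the core check such a precompile might perform (the names and interface are illustrative; the actual proposal is still early-stage research):

```python
# Hypothetical sketch of the core check behind an EXECUTE-style precompile.
# `state_transition` stands in for L1's own enshrined EVM state-transition
# function, which a native rollup would reuse rather than reimplement.

def execute_precompile(state_transition, pre_state_root: bytes,
                       post_state_root: bytes, trace: bytes,
                       gas_limit: int) -> bool:
    """Return True iff applying `trace` on top of `pre_state_root` with the
    enshrined EVM rules reproduces `post_state_root` within `gas_limit`."""
    computed_root, gas_used = state_transition(pre_state_root, trace)
    return computed_root == post_state_root and gas_used <= gas_limit
```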
Question: After Fusaka/Glamsterdam, what research might be ready for development upgrades?
Toni Wahrstätter: PeerDAS is being worked on intensively, along with proposals such as EOF, FOCIL, ePBS, the secp256r1 precompile, and delayed execution. PeerDAS is now ready for inclusion in the Fusaka upgrade, and there seems to be broad consensus on its urgency. The other proposals mentioned above are possible candidates for the Glamsterdam upgrade, but the specific EIPs that will be included in the upgrade have not yet been finalized.
Question: Vitalik has written about proposed steps to take in the event of a quantum emergency. How would we determine if we are in a quantum emergency?
Vitalik Buterin: Realistically, it would be based on a combination of media coverage, expert opinion, and Polymarket forecasts about when a “real” quantum computer (i.e., one able to break 256-bit elliptic curve cryptography) will arrive. If the timeline is within 1-2 years, it is definitely an emergency; if it is around 2 years out, it is not quite an emergency, but still urgent enough for us to set aside other roadmap priorities and get quantum-resistant technology into the live protocol first.
Question: What is the gas limit target for L1 in 2025?
Toni Wahrstätter: There are many different opinions on the gas limit, but it ultimately comes down to one key question: should we scale Ethereum L1 by raising the gas limit, or should we focus on L2 and enable more data blocks (blobs) through advanced technologies like DAS (Data Availability Sampling)?
Vitalik recently published a blog post discussing the possibility of modestly scaling L1, in which he listed reasons why raising the gas limit might make sense. However, increasing the gas limit also comes with trade-offs: higher hardware requirements, state and history data growth, and greater bandwidth usage.
On the other hand, Ethereum’s Rollup-centric scaling vision aims to achieve greater scalability without increasing node hardware requirements. Technologies like PeerDAS (short term) and Full DAS (medium to long term) are expected to unlock significant scaling potential while keeping resource requirements reasonable.
Having said that, I would not be surprised if validators push the gas limit up to 60 million after the Pectra hard fork in April. But in the long run, the main focus of scaling will likely be on DAS-based solutions rather than just increasing the gas limit.
Question: If the Ethereum beam client experiment (or whatever it ends up being called) is successful and in 2-3 years we have several working Ethereum beam client implementations, will we need to go through a phase where current PoS and beam PoS run in parallel and both receive staking rewards, just like we had a period of PoW + PoS parallelism before the PoS transition?
Vitalik Buterin: I think we can just do an immediate upgrade. The reasons two chains had to run in parallel during the Merge were:
PoS as a whole was untested, and we needed the entire PoS ecosystem to run long enough to feel confident switching to it.
PoW could undergo reorganizations, and the switching mechanism needed to be robust to this.
This time, PoS has finality, and much of the infrastructure (e.g. staking) will carry over. Therefore, we can simply do a large hard fork to switch the validation rules from the beacon chain to the new design. Perhaps at the precise moment of the switch, the economic finality guarantees may not be fully met, but in my opinion, this is a small and acceptable price to pay.
Question: The Ethereum Foundation has launched a $2 million academic funding program through 2025. What specific research areas are prioritized? How does the Foundation plan to integrate academic research results into the broader Ethereum development roadmap?
Fredrik Svantes: Here is a wish list: https://www.notion.so/efdn/17bd9895554180f9a9c1e98d1eee7aec.
Some research directions that the Protocol Security Team is interested in include:
P2P security: Many of the vulnerabilities we found were related to denial-of-service attack vectors at the network layer (e.g. libp2p or devp2p), so improving security in this area would be very valuable.
Fuzzing: We are currently fuzzing the EVM, consensus layer clients, etc., but there are definitely more areas to explore (e.g. the networking layer); a minimal differential-fuzzing sketch follows this list.
Understand the risks of Ethereum’s current reliance on the supply chain.
How to use LLMs (large language models) to improve protocol security (e.g. code auditing, automated fuzzing tools, etc.).
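As an illustration of the fuzzing direction above, here is a minimal differential-fuzzing loop: two independent EVM implementations are fed the same random input, and any divergence is flagged as a potential consensus bug. The evm_a/evm_b interfaces are hypothetical stand-ins for real client harnesses:

```python
# Minimal differential-fuzzing sketch. `evm_a` and `evm_b` are hypothetical
# wrappers around two independent EVM implementations; real EF fuzzing
# setups (e.g. for the EVM and consensus clients) are far more elaborate.
import os

def random_bytecode(max_len: int = 64) -> bytes:
    return os.urandom(1 + os.urandom(1)[0] % max_len)

def differential_fuzz(evm_a, evm_b, iterations: int = 10_000) -> None:
    for _ in range(iterations):
        code = random_bytecode()
        result_a = evm_a.run(code)  # assumed to return (status, gas_used, output)
        result_b = evm_b.run(code)
        if result_a != result_b:
            # Any divergence between implementations is a potential consensus bug.
            print(f"mismatch on bytecode {code.hex()}: {result_a} vs {result_b}")
```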
Other
Question: What applications would you most like to see in the Ethereum ecosystem?
Toni Wahrstätter: In my opinion, application developers on Ethereum have done an excellent job of identifying actual user needs and meeting those needs - even if L1 or L2 may not be fully ready to support certain applications. I am particularly interested in applications that combine self-custody with privacy, and there are some great solutions already. Two prominent examples are Umbra and Fluidkey, both of which use stealth addresses to bring more privacy to everyday user interactions. In addition, applications like Railgun, Tornado Cash, and Privacy Pools provide significant value by enhancing on-chain privacy. Coming back to your question, I would like to see more wallets make privacy the default setting instead of letting users actively choose it, while still maintaining a great user experience (which is harder than people think).
Question: Aren’t you worried about the risk of big companies taking over Ethereum?
Vitalik Buterin: Yes, it is definitely an ongoing concern, and I think the role of the Ethereum Foundation should be to proactively address these risks. The goal is to maintain the neutrality of Ethereum, not the neutrality of the Ethereum Foundation - usually the two align, but sometimes they don't, and when that happens, we should prioritize the former. The main risks I see right now are centered around L2 and the wallet layer, as well as staking and custodial providers. The Ethereum Foundation has recently begun to intervene in the first two areas, driving adoption of interoperability standards. That being said, there are definitely opportunities for us to be more proactive in mitigating risk, and we are exploring various options.
Question: Why is the Ethereum Foundation (EF) so opaque? There is very little transparency and accountability to the community.
Justin Drake: What do you want to know? The Ethereum Foundation Research team holds two AMAs per year, and a full list of our 40 researchers is available at Research.Ethereum.Foundation. Our research is conducted in public, for example on Ethresear.ch.
Question: What do you think about the future of hardware wallets?
Justin Drake: In the future, most hardware wallets will run in the phone's secure enclave (rather than on a separate device like a Ledger USB stick). With account abstraction, it is already possible to leverage infrastructure like passkeys. I expect to see native integration (e.g. in Apple Pay) within a decade.
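Passkeys sign with the secp256r1 (P-256) curve, which is why the secp256r1 precompile mentioned earlier matters for smart accounts: it would let a contract wallet verify an enclave-held key's signature on-chain. A minimal sketch of that verification step, using Python's cryptography library purely for illustration (on-chain encoding and hashing details differ):

```python
# Minimal sketch of what passkey-based signing looks like cryptographically:
# the device enclave holds a P-256 (secp256r1) key and produces an ECDSA
# signature that a smart-contract account (via a secp256r1 precompile) could
# verify on-chain. Uses the `cryptography` library purely for illustration.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# In a real passkey flow the private key never leaves the secure enclave;
# here we generate one locally just to demonstrate the verification step.
enclave_key = ec.generate_private_key(ec.SECP256R1())
user_op_hash = b"\x11" * 32  # placeholder for the account's operation hash

signature = enclave_key.sign(user_op_hash, ec.ECDSA(hashes.SHA256()))

try:
    enclave_key.public_key().verify(signature, user_op_hash,
                                    ec.ECDSA(hashes.SHA256()))
    print("signature valid: the account would accept this operation")
except InvalidSignature:
    print("signature invalid")
```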
Vitalik Buterin: Hardware wallets need to be “truly secure” in several key aspects:
Secure Hardware: Built on an open source and verifiable hardware stack (see, for example, IRIS) to mitigate the risks of: (i) intentional backdoors; and (ii) side-channel attacks.
Interface-layer security: Hardware wallets should provide enough transaction information to prevent connected computers from tricking you into signing something you didn’t mean to.
Wide availability: Ideally, we could make a device that is both a cryptocurrency hardware wallet and a security device for other purposes, which would encourage more people to actually buy it and use it, rather than forgetting it exists.