Interpretation of MegaETH Whitepaper

Intermediate · 7/9/2024, 6:23:30 PM
While interpreting the MegaETH whitepaper, it's evident that the document reflects a technical nerd's rigor and an abundance of detail. It discusses the current limitations of blockchain technology and how MegaETH aims to address them. By publicly disclosing detailed technical data and test results, the project seeks to enhance its transparency and credibility, giving the technical community and potential users a deeper understanding of, and trust in, the system's performance. By specializing nodes and separating transaction execution from the responsibilities of full nodes, MegaETH reduces consensus overhead.

Infrastructure never sleeps; there are more chains than applications.

While the market suffers from the torment of airdrops from various “king projects,” the primary market is still racing to create the next “king.”

Last night, another high-profile Layer 2 project emerged — MegaETH. It raised $20 million in seed funding, led by Dragonfly with participation from Figment Capital, Robot Ventures, and Big Brain Holdings. Angel investors include Vitalik, Cobie, Joseph Lubin, Sreeram Kannan, and Kartik Talwar.

Top-tier VCs leading the round, industry figures like Vitalik as angel investors, and a project name that directly includes "ETH": all of these tags aim to establish "legitimacy" in a crypto market where attention is scarce.

According to the official project description, MegaETH can be summarized with a familiar word — Fast.

As the first real-time blockchain, it promises lightning-fast transaction speeds, sub-millisecond latency, and more than 100,000 transactions per second…

In a market where all participants are fatigued by narratives about blockchain performance, how does MegaETH stand out?

We dug into MegaETH’s whitepaper to find the answer.

Many Chains, But None Can Achieve “Real-Time”

Aside from narratives and hype, why does the market need a blockchain like MegaETH?

MegaETH’s own answer is that simply creating more chains doesn’t solve the scalability problem of blockchains. Current L1 and L2 solutions face common issues:

  • All EVM chains exhibit low transaction throughput;
  • Due to limited computational resources, complex applications can’t be deployed on-chain;
  • Applications requiring high update rates or rapid feedback loops are infeasible with long block times.

In other words, current blockchains can’t achieve:

  • Real-time settlement: Transactions are processed immediately upon reaching the blockchain, and results are published almost instantly.
  • Real-time processing: The blockchain system can process and verify a large number of transactions in an extremely short amount of time.

What does this real-time capability look like in practical applications?

For instance, high-frequency trading requires the ability to place and cancel orders within milliseconds. Similarly, real-time combat or physics simulation games need blockchains that can update states at extremely high frequencies. Clearly, current blockchains can’t achieve this.

Node Specialization and Real-Time Performance

So, how does MegaETH achieve the aforementioned “real-time” capabilities? In short:

Node specialization: By separating transaction execution tasks from the responsibilities of full nodes, MegaETH reduces consensus overhead.

To be more specific, MegaETH features three main roles: sequencers, provers, and full nodes.

The sequencer is responsible for ordering and executing user transactions. At any given time, MegaETH runs only one active sequencer, which eliminates consensus overhead on the normal execution path.

Full nodes receive state differences (state diffs) from the sequencer over the P2P network and apply them directly to their local state, without re-executing transactions.

Provers use stateless verification to validate blocks asynchronously and out of order.

A simplified workflow of MegaETH is as follows:

  1. Transaction Processing and Sequencing: User-submitted transactions are first sent to the Sequencer, which processes these transactions in order, generating new blocks and witness data.

  2. Data Publication: The Sequencer publishes the generated blocks, witness data, and state differences to EigenDA (Data Availability Layer), ensuring this data is available across the network.

  3. Block Verification: The Prover Network fetches blocks and witness data from the Sequencer, verifies them using specialized hardware, generates proofs, and returns them to the Sequencer.

  4. State Updates: The full node network receives state differences from the Sequencer, updates its local state, and can verify block validity through the Prover Network, ensuring blockchain consistency and security.
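
To make this division of labor concrete, here is a minimal Python sketch of the flow. It is purely illustrative: the class names, the toy "transactions" (plain key/value writes), and the data structures are our own assumptions, not MegaETH's implementation. The point is only that full nodes apply state diffs without re-executing, while provers verify statelessly against a witness.

```python
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    txs: list          # ordered transactions
    state_diff: dict   # key -> new value, produced by execution
    witness: dict      # pre-state values needed for stateless verification

class Sequencer:
    """Single active node: orders and executes transactions."""
    def __init__(self):
        self.state = {}
        self.height = 0

    def build_block(self, txs):
        diff = {}
        for key, value in txs:                            # execute in order
            diff[key] = value
        witness = {k: self.state.get(k) for k in diff}    # pre-values touched
        self.state.update(diff)
        self.height += 1
        return Block(self.height, txs, diff, witness)

class FullNode:
    """Applies state diffs directly; never re-executes transactions."""
    def __init__(self):
        self.state = {}

    def apply(self, block: Block):
        self.state.update(block.state_diff)

class Prover:
    """Stateless verification: replays against the witness only."""
    def verify(self, block: Block) -> bool:
        replay = dict(block.witness)
        for key, value in block.txs:
            replay[key] = value
        return all(replay[k] == v for k, v in block.state_diff.items())

sequencer, full_node, prover = Sequencer(), FullNode(), Prover()
block = sequencer.build_block([("alice", 90), ("bob", 110)])
full_node.apply(block)       # cheap: no execution, just a state update
assert prover.verify(block)  # can run asynchronously and out of order
```

The design idea this illustrates is that the expensive work (execution) happens once, on one powerful machine, while every other node does the much cheaper work of applying diffs or checking proofs.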

Measure First, Then Execute

Other parts of the whitepaper show that MegaETH itself realized that, while the idea of node specialization is sound, that does not mean it can easily be put into practice.

When it comes to building the chain, MegaETH has an interesting approach: measure first, then execute. That is, conduct in-depth performance measurements to identify the real problems of existing blockchain systems before figuring out how to apply the node specialization approach to solve these issues.

So, what issues did MegaETH identify?

The following part might be too technical for the average reader, so feel free to skip to the next section if you find it less engaging.

  • Transaction Execution: Their experiments show that even with powerful servers equipped with 512GB of memory, the existing Ethereum execution client Reth can only achieve about 1000 TPS (transactions per second) in real-time sync settings, indicating significant performance bottlenecks in current systems for executing transactions and updates.
  • Parallel Execution: Despite the buzz around parallel EVM, there are still unresolved performance issues. The speedup from parallel EVM in actual production is limited by the parallelism of the workload itself. MegaETH's measurements show that the median parallelism of recent Ethereum blocks is less than 2; even when multiple blocks are combined, the median parallelism only rises to 2.75.

(A median parallelism below 2 means that, in most blocks, fewer than two transactions can be executed at the same time. In other words, most transactions in current blockchain workloads are interdependent and cannot be processed in parallel at scale.)
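
To give a feel for what this parallelism number measures, here is a rough sketch. It is our own construction, not the whitepaper's methodology: treat each transaction as a set of state keys it reads and writes, and define parallelism as the number of transactions divided by the length of the longest dependency chain among them.

```python
def block_parallelism(txs):
    """Estimate how many transactions in a block could run concurrently.

    txs: list of (read_set, write_set) pairs, each a set of state keys.
    Parallelism here is total transactions divided by the critical path
    (the longest dependency chain) -- one common proxy; the whitepaper's
    exact methodology may differ.
    """
    n = len(txs)
    depth = [1] * n  # length of the longest dependency chain ending at tx i
    for i, (reads_i, writes_i) in enumerate(txs):
        for j in range(i):
            reads_j, writes_j = txs[j]
            # tx i depends on tx j if it touches something tx j wrote,
            # or overwrites something tx j read
            conflict = (writes_j & (reads_i | writes_i)) or (writes_i & reads_j)
            if conflict:
                depth[i] = max(depth[i], depth[j] + 1)
    return n / max(depth)

# Four swaps against the same pool all contend on the pool's reserves:
pool_txs = [({"pool"}, {"pool"})] * 4
print(block_parallelism(pool_txs))   # 1.0 -- fully serial

# Four transfers between unrelated accounts touch disjoint keys:
transfers = [({f"a{i}"}, {f"a{i}", f"b{i}"}) for i in range(4)]
print(block_parallelism(transfers))  # 4.0 -- fully parallel
```

Real Ethereum blocks apparently sit much closer to the first case than the second, which is why parallel EVMs deliver less speedup in practice than the concept suggests.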

  • Interpreter Overhead: Even the faster EVM interpreters, like revm, are still 1-2 orders of magnitude slower than native execution.
  • State Synchronization: Synchronizing 100,000 ERC-20 transfers per second consumes about 152.6 Mbps of bandwidth, and more complex transactions require even more. Updating the state root in Reth consumes 10 times more computational resources than executing transactions. In simpler terms, current blockchain resource consumption is quite high.
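
As a sanity check on the bandwidth figure, the arithmetic is straightforward. Note that the per-transfer byte count below is inferred from the quoted 152.6 Mbps rather than stated in the article.

```python
# Back-of-envelope check on the quoted state-sync bandwidth figure.
transfers_per_second = 100_000
bytes_per_transfer = 190.75  # assumed size of one transfer's state diff, implied by the quote
bits_per_second = transfers_per_second * bytes_per_transfer * 8
print(f"{bits_per_second / 1e6:.1f} Mbps")  # ~152.6 Mbps
```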

After identifying these issues, MegaETH began to address them with targeted solutions, which aligns with the solution logic mentioned above:

  1. High-Performance Sequencer:
  • Node Specialization: MegaETH improves efficiency by assigning tasks to specialized nodes. Sequencer nodes handle transaction ordering and execution, full nodes manage state updates and validation, and prover nodes verify blocks using dedicated hardware.
  • High-End Hardware: Sequencers use high-performance servers (e.g., 100 cores, 1TB of memory, a 10Gbps network) to handle large volumes of transactions and generate blocks quickly.
  2. State Access Optimization:
  • In-Memory Storage: Sequencer nodes are equipped with large amounts of RAM, enough to hold the entire blockchain state in memory, eliminating SSD read latency and speeding up state access.
  • Parallel Execution: Although the speedup from parallel EVM on existing workloads is limited, MegaETH optimizes its parallel execution engine and supports transaction priority management to ensure critical transactions are processed promptly during peak times.
  3. Interpreter Optimization:
  • AOT/JIT Compilation: MegaETH introduces Ahead-Of-Time (AOT) and Just-In-Time (JIT) compilation techniques to accelerate the execution of compute-intensive contracts. Although the gains for most contracts in production environments are modest, these techniques can significantly improve performance in specific high-compute scenarios.
  4. State Synchronization Optimization:
  • Efficient Data Transmission: MegaETH designs an efficient state-diff encoding and transmission scheme, capable of synchronizing large state updates within limited bandwidth (a minimal encoding sketch follows this list).
  • Compression Technology: By adopting advanced compression techniques, MegaETH can synchronize state updates for complex transactions (like Uniswap swaps) within bandwidth constraints.
  5. State Root Update Optimization:
  • Optimized MPT Design: MegaETH employs an optimized Merkle Patricia Trie (such as NOMT) to reduce read/write operations and improve the efficiency of state root updates.
  • Batch Processing: By batching state updates, MegaETH reduces random disk I/O and improves overall performance.
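
As an illustration of the state-diff encoding and compression idea in point 4, here is a minimal sketch. The wire format (compact JSON plus zlib) and the field layout are our own assumptions for demonstration; MegaETH's actual encoding is not specified in this article.

```python
import json
import zlib
from dataclasses import dataclass

@dataclass
class AccountDiff:
    address: str   # hex account address
    balance: int   # new balance after the block
    nonce: int     # new nonce after the block
    storage: dict  # changed storage slots: {slot_hex: value_hex}

def encode_state_diff(diffs: list[AccountDiff]) -> bytes:
    """Serialize per-account changes compactly and compress them.

    Illustrative only: JSON with short keys plus zlib stands in for
    whatever binary encoding a production system would use.
    """
    payload = json.dumps(
        [{"a": d.address, "b": d.balance, "n": d.nonce, "s": d.storage}
         for d in diffs],
        separators=(",", ":"),
    ).encode()
    return zlib.compress(payload, level=6)

# Example: the state touched by one ERC-20 transfer -- the sender's
# nonce/balance plus two storage slots in the token contract.
diff = [
    AccountDiff("0xSenderEOA", 995_000_000_000_000_000, 42, {}),
    AccountDiff("0xTokenContract", 0, 1, {
        "0xslotOfSender": "0x0de0b6b3a7640000",
        "0xslotOfReceiver": "0x1bc16d674ec80000",
    }),
]
blob = encode_state_diff(diff)
print(f"compressed diff size: {len(blob)} bytes")
```

Keeping the per-transaction diff small is what makes the bandwidth budget discussed earlier workable at 100,000 transactions per second.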

The above content is quite technical, but beyond these technical details, you can see that MegaETH truly has some technical prowess. And one clear motivation is:

By publicly sharing detailed technical data and test results, MegaETH aims to enhance the project’s transparency and credibility, allowing the technical community and potential users to gain a deeper understanding and trust in its system’s performance.

Prestigious Team, Frequently Favored?

While analyzing the whitepaper, it's clear that despite MegaETH's somewhat flashy name, the documents and explanations often reveal a meticulous, almost overly detailed, technical nerdiness.

Public information indicates that MegaETH’s team appears to have a Chinese background. The CEO, Li Yilong, holds a Ph.D. in Computer Science from Stanford. The CTO, Yang Lei, holds a Ph.D. from MIT. The CBO (Chief Business Officer), Kong Shuyao, has an MBA from Harvard Business School and has experience working at several industry institutions (such as ConsenSys). The head of growth shares some career overlap with the CBO and also graduated from New York University.

A team where all four members come from top U.S. universities naturally wields significant influence in terms of connections and resources.

Previously, in the article "Graduate as CEO, Pantera Leads $25 Million Round for Nexus," we introduced Nexus's CEO, who, despite being a fresh graduate, also hails from Stanford and appears to have a solid technical background.

Top VCs indeed have a preference for top-tier technologists from prestigious schools. With Vitalik also investing and the project name including “ETH,” the technical narrative and marketing impact are likely to be maximized.

In the current climate, where old “king projects” become “fallen kings,” and there is a lull in new projects and market activity, MegaETH is poised to trigger a new wave of FOMO.

We will continue to monitor and provide updates on the project’s testnet and interactions.

Statement:

  1. This article is reproduced from [TechFlow]. The original title is "Interpretation of the MegaETH white paper: Infrastructure never sleeps, what is so special about the huge financing L2 that Vitalik participated in?" The copyright belongs to the original author [深潮TechFlow]. If you have any objection to the reprint, please contact the Gate Learn team, which will handle it as soon as possible according to the relevant procedures.

  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.

  3. Other language versions of this article are translated by the Gate Learn team. Unless Gate.io is mentioned, the translated article may not be reproduced, distributed, or plagiarized.

