Sequencer = Aggregator + Header Producer

Advanced · Feb 28, 2024
  • Forwarded original title: Celestia researcher analyzes 6 Rollup variants: Sequencer=aggregator+Header generator

Note: For the purpose of making the Rollup model easier to understand and analyze, Celestia researcher NashQ divided the Rollup Sequencer into two logical entities: the Aggregator and the Header Producer. He also divided transaction processing into three logical steps: inclusion, ordering, and execution.

Guided by this analytical framework, the six major variants of Sovereign Rollup become clearer and easier to understand. NashQ discusses in detail the censorship resistance and liveness of the different Rollup variants, as well as the minimum node configuration each variant requires for trust minimization (i.e., the minimal set of nodes Rollup users must run to get as close as possible to a trustless state).

Although this article analyzes Rollups from Celestia's perspective, which differs from how the Ethereum community models Rollups, the many connections between Ethereum Rollups and Celestia Sovereign Rollups, together with the latter's growing influence, make it well worth reading for Ethereum enthusiasts as well.

What is a Rollup?

Rollups are blockchains that post their transaction data to another blockchain and inherit its consensus and data availability.

Why am I changing “block” to “transaction data” here? Let me explain the difference between a rollup block and rollup data, and show you with our first variant that a minimal rollup only needs rollup data.

A rollup block is a data structure representing the blockchain at a certain height. It consists of rollup data and a rollup header. Rollup data is either a batch of transactions or the state diff produced by a batch of transactions.

Variant 1: Based Pessimistic Rollup

The simplest way to build a rollup starts with a user posting transactions to another blockchain. We will call this blockchain the consensus and data availability layer, but I will shorten it to DA-Layer in all the following diagrams. (Note: similar to Layer1 often referred to in the Ethereum community).

In our first variant every rollup node must replay all transactions on the blockchain to check the newest state. We just created a pessimistic Rollup!

A pessimistic rollup is a rollup that only supports full nodes that replay all the transactions in the rollup to check its validity.

But who is the sequencer in this case? No entity actually executes the transactions apart from the rollup full nodes themselves. Usually, a sequencer would aggregate the transactions and produce a rollup header, but there is no header in this case!

To facilitate discussion, we split the sequencer into two logical entities: the aggregator and the header producer. What differentiates them is state awareness: the header producer must be state aware (i.e., it must execute transactions to compute the header), while the aggregator does not need to understand the state in order to aggregate it.

Sequencing is the process of aggregation and header production.

Aggregation is the process of batching transactions into one batch. A batch of transactions consists of one or more transactions. (Note: the batch is the data in a Rollup block other than the header.)

Header production is the process of creating the rollup header.

Rollup Header is metadata about the block, which, at minimum, includes a commitment to the transactions in that block. (Note: the commitment here is to the transaction data contained in the block, e.g., a hash or Merkle root of the batch.)
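
To make the split concrete, here is a minimal sketch of the pieces defined above. The types, field names, and the hash-based commitment are illustrative assumptions for this article, not the data structures of Celestia or any particular rollup.

```python
# Illustrative sketch only: a toy batch, rollup header, aggregator, and
# header producer. The hash-based commitment stands in for whatever
# commitment scheme a real rollup would use.
import hashlib
from dataclasses import dataclass
from typing import List

Tx = bytes  # a serialized rollup transaction


@dataclass
class Batch:
    txs: List[Tx]  # one or more transactions, batched without executing them


@dataclass
class RollupHeader:
    height: int
    batch_commitment: bytes  # commitment to the transactions in this block
    state_root: bytes        # requires execution, so only a state-aware entity can supply it


def aggregate(txs: List[Tx]) -> Batch:
    """Aggregation: batch transactions; no state awareness is needed."""
    return Batch(txs=txs)


def commit(batch: Batch) -> bytes:
    """Toy commitment: a hash over the concatenated transactions."""
    h = hashlib.sha256()
    for tx in batch.txs:
        h.update(tx)
    return h.digest()


def produce_header(height: int, batch: Batch, state_root: bytes) -> RollupHeader:
    """Header production: the producer must have executed the batch to know state_root."""
    return RollupHeader(height, commit(batch), state_root)
```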

Through this lens, we can see who plays each role in a Rollup. Let's look at the aggregator first. The pessimistic rollup described earlier has no header production step, and users publish transactions directly to the DA-Layer, which means the DA-Layer network essentially acts as the aggregator.

This variant of Rollup delegates the aggregation step to the DA-Layer and does not have a sequencer of its own. A rollup like this is sometimes called a “based rollup”.

A based Rollup has the same censorship resistance and liveness as the DA-Layer (liveness measures how quickly the system responds to user requests). If users of this type of Rollup want to achieve a trust-minimized state (the closest thing to trustless), they must run at least a DA-Layer light node and a rollup full node.

Variant 2: Pessimistic Rollup using a Shared Aggregator

Let’s discuss pessimistic aggregation using shared aggregators. This idea was proposed by Evan Forbes in his forum post on shared sequencer design. The key assumption is that the shared sequencer is the only canonical way to sequence transactions. Evan explains the benefits of shared sequencers like this:

“To unlock a web2 equivalent UX, the shared sequencers […] can provide fast soft commitments (note: not an entirely reliable guarantee). These soft commitments provide some arbitrary promise of the final ordering of transactions (that is, they promise that the transaction order will not change), and can be used to create early, not-yet-finalized versions of the updated state.

As soon as the block data has been confirmed to be posted on the base layer (note: i.e., the DA-Layer), the state can be considered final.”

Because this is still a pessimistic rollup, there are only rollup full nodes and no light nodes; each node has to execute all transactions to guarantee validity. With no light nodes in the system, there is no need for a rollup header, and hence no header producer. (Note: generally speaking, a light node of a blockchain does not synchronize complete blocks, but only receives block headers.)

Since there is no Rollup Header production step, this Rollup's shared sequencer does not need to execute transactions to update state (a prerequisite for header production); it only performs the aggregation of transaction data. So I prefer to call it a shared aggregator.

In this variant, Rollup users need to run at least the following in a trust-minimized state:

DA-Layer light node + light node of shared aggregator network + Rollup full node.

Here, the light node of the shared aggregator network is needed to verify the aggregator headers it publishes (not to be confused with the Rollup Header). As mentioned above, the shared aggregator handles transaction ordering, and each published aggregator header contains a cryptographic commitment to the batch it published on the DA-Layer.

In this way, the operator of the Rollup node can confirm that the batch received from the DA-Layer was created by the shared aggregator and not by someone else.

(Since the above is somewhat dense, you may want to review the diagram again.)

Inclusion is the process by which a transaction is accepted into the blockchain.

Ordering is the process of arranging transactions in a specific sequence in the blockchain.

Execution is the process by which the transactions in the blockchain are processed, and their effects are applied to the state of the blockchain.
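
As a rough illustration of these three steps, here is a toy pipeline over a key-value account state; the transaction format and the first-come-first-served ordering are assumptions made for the example, not part of any specific design.

```python
# Illustrative only: inclusion, ordering, and execution over a toy account state.
from typing import Dict, List, Tuple

Tx = Tuple[str, int]      # (account, amount) -- a toy transaction
State = Dict[str, int]    # account -> balance


def include(block: List[Tx], mempool: List[Tx]) -> List[Tx]:
    """Inclusion: accept pending transactions into the block."""
    return block + mempool


def order(block: List[Tx]) -> List[Tx]:
    """Ordering: fix a definite sequence (here simply first come, first served)."""
    return list(block)


def execute(block: List[Tx], state: State) -> State:
    """Execution: apply each transaction's effect to the state."""
    new_state = dict(state)
    for account, amount in block:
        new_state[account] = new_state.get(account, 0) + amount
    return new_state
```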

Because the shared aggregator controls inclusion and ordering, we inherit its censorship resistance.

If we assume L_ss is the liveness of the shared aggregator, and L_da is the liveness of the DA-Layer, then the liveness of this scheme is L = L_da && L_ss. In other words, if either system has a liveness failure, the rollup also has a liveness failure.

For simplicity, I will use liveness as a boolean. If the shared aggregator fails, we can’t proceed with the rollup. If the DA-Layer fails, we could continue with the shared aggregator’s soft commitments. Still, we’d be relying on the shared aggregator’s consensus and data availability, which would be worse than the original DA-Layer.
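
Treating liveness as a boolean, as the article does, the composition can be written out directly; the function and variable names below are purely illustrative.

```python
# Illustrative: liveness of variant 2 as a boolean AND of its two dependencies.
def rollup_liveness(l_da: bool, l_ss: bool) -> bool:
    """L = L_da && L_ss: a liveness failure in either system halts the rollup."""
    return l_da and l_ss


assert rollup_liveness(True, True)        # both live -> rollup live
assert not rollup_liveness(True, False)   # shared aggregator down -> rollup down
assert not rollup_liveness(False, True)   # DA-Layer down -> rollup down
```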

Let’s continue to explore the censorship resistance of the above Rollup solution:

In this scheme, the DA-Layer cannot censor specific transactions. (Note: censoring a transaction means refusing to let it be included on chain.) It can only censor whole rollup batches that the shared aggregator has already aggregated (i.e., refuse to include a batch in a DA-Layer block).

However, by the time the shared aggregator submits a transaction batch to the DA-Layer, it has already completed the transaction ordering, and the order between different batches is also determined. Therefore, this kind of censorship by the DA-Layer has no effect other than delaying the finality of the Rollup's ledger.

To sum up, I believe the focus of censorship resistance is ensuring that no single entity can control or manipulate the flow of information within the system, while liveness involves maintaining the functionality and availability of the system even in the presence of network outages and adversarial actions. Although this conflicts with the current mainstream academic definitions, I will use the definitions stated here.

Variant 3: Pessimistic Rollup with Based and Shared Aggregation

Even though the community enjoys the benefits of a shared aggregator, we want to avoid depending on it and want a fallback to the DA-Layer. We will combine the two ordering paths and allow users to submit transactions directly to the DA-Layer, combining based and shared aggregation.

We assume the final ordering will be interpreted, per DA-Layer block, as all transactions ordered by the shared aggregator first, followed by all based transactions. We call this the rollup's fork choice rule.

Aggregation is a two-step process here. First, the shared aggregator takes the lead, aggregating some transactions. Then, the DA-Layer aggregates the already ordered batches together with the transactions users submitted directly.
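
A sketch of this fork choice rule under the stated assumption: within each DA-Layer block, the shared aggregator's batches come first, in the order they were posted, followed by the transactions users submitted directly. The function below is an illustration, not anyone's reference implementation.

```python
# Illustrative fork choice rule for one DA-Layer block: shared-aggregator
# batches first (already internally ordered), then direct "based" submissions.
from typing import List


def canonical_order(aggregator_batches: List[List[bytes]],
                    based_txs: List[bytes]) -> List[bytes]:
    ordered: List[bytes] = []
    for batch in aggregator_batches:   # order fixed by the shared aggregator
        ordered.extend(batch)
    ordered.extend(based_txs)          # direct-to-DA transactions come last
    return ordered
```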

Censorship resistance analysis is now more complex. A DA-Layer node may inspect the batch submitted by the shared aggregator before the next DA-Layer block is produced. Knowing the transaction data in the batch, the DA-Layer node can extract MEV: it can submit a front-running transaction from its own account on the Rollup network and include it in the DA-Layer block ahead of the batch submitted by the Rollup's shared aggregator.

Clearly, the ordering finality guaranteed by the soft commitments of this third variant is more fragile than in the second variant above. In this case, the shared aggregator leaks MEV to the DA-Layer node. On this topic, I suggest readers watch the lecture on profitable censorship MEV.

Some designs have already appeared to reduce the ability of DA-Layer nodes to execute such MEV transactions, such as a “reorganization window” that delays the transactions Rollup users submit directly to the DA-Layer. Sovereign Labs details this in its design proposal named Based Sequencing with Soft Confirmations, which introduces the concept of a “preferred sequencer”.

As MEV depends on the aggregator scheme you choose and the fork choice rule of the rollup, some schemes will leak no MEV to the DA-Layer and some will leak part or all of it, but that's a topic for another day.

As for liveness, this rollup design has a leg up over just having a shared aggregator. If the shared aggregator has a liveness failure, the user can still submit transactions to the DA-Layer.

Finally, let’s talk about the smallest trust-minimized setup: a DA-Layer light node + shared aggregator light node + rollup full node.

We still need to validate the shared aggregator's headers so that our rollup full node can differentiate the transaction batches when applying its fork choice rule.

Variant 4: Based Optimistic Rollup with a Centralized Header Producer

Let’s start cooking some light nodes using a variant called the based optimistic rollup with a centralized header producer. This design uses the DA-Layer to aggregate transactions, but we introduce a centralized header producer to enable rollup light nodes.

Rollup light nodes can indirectly verify the validity of Rollup transactions through single-round fraud proofs. A light node takes an optimistic attitude toward the Rollup header producer and finally confirms a block once the fraud proof window ends. The other possibility is that it receives a fraud proof from an honest full node, learning that the header producer has committed to incorrect data.

I will not go into detail about how single-round fraud proofs work, as that is beyond the scope of this article. The benefit here is that you can reduce the fraud proof window from 7 days to some amount that is yet to be determined but orders of magnitude smaller. Light nodes are able to receive fraud proofs through the p2p layer without needing to wait for a dispute, as everything is captured in a single proof.
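
Under these assumptions, a rollup light node's handling of a header might look like the sketch below. The window length and all names are placeholders, since the article leaves the exact window undetermined.

```python
# Illustrative optimistic light-node logic: accept a header optimistically,
# reject it if a fraud proof arrives, finalize it once the window elapses.
from dataclasses import dataclass
from typing import Optional

FRAUD_PROOF_WINDOW = 60 * 60   # placeholder length in seconds; the real value is TBD


@dataclass
class PendingHeader:
    header: bytes
    received_at: float                   # when the light node saw the header
    fraud_proof: Optional[bytes] = None  # set if an honest full node sends a proof


def header_status(p: PendingHeader, now: float) -> str:
    if p.fraud_proof is not None:
        return "rejected"                # single-round fraud proof shows invalid execution
    if now - p.received_at >= FRAUD_PROOF_WINDOW:
        return "final"                   # window elapsed with no fraud proof
    return "pending"                     # optimistically accepted, not yet final
```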

We use the DA-Layer as the aggregator, inheriting its censorship resistance. It does inclusion and ordering. The centralized header producer reads the canonical order from the DA-Layer and can construct a valid header from it. The centralized header producer posts the header and state roots to the DA-Layer; these state roots are essential for creating a fraud proof against this commitment. In short, the aggregator does inclusion and ordering, while the header producer does the execution.

It is assumed that the DA-Layer (which also acts as Rollup’s aggregator at this time) is sufficiently decentralized and has good censorship resistance. In addition, the Header producer cannot change the Rollup transaction sequence published by the aggregator. Now, if the Header producer is decentralized, the only benefit is better liveness, but the other properties of Rollup are the same as the first variant, Based Rollup.

If the header producer has a liveness failure, the rollup also has a liveness failure: light nodes won't be able to follow the chain, while full nodes could still follow the chain if desired, falling back to a based pessimistic rollup as described in variant 1. The minimum trust-minimized configuration for variant 4 is:

DA-Layer light node + Rollup light node.

Variant 5: Based ZK-Rollup with a Decentralized Prover Market

We have discussed Pessimistic Rollups (Based Rollups) and Optimistic Rollups; now it is time to consider ZK-Rollups. Recently, Toghrul gave a presentation on separating the aggregator (Sequencer) from the header producer (Prover): Sequencer-Prover Separation in Zero-Knowledge Rollups. In this model, publishing transactions as Rollup data rather than state diffs is easier to handle, so I will focus on the former. Variant 5 is a based ZK-Rollup with a decentralized prover market.

By now, you should be familiar with what a based rollup does. Variant 5 delegates the aggregator role to the DA-Layer nodes, which perform the work of inclusion and ordering. I will quote from Sovereign Labs' documentation, which does an amazing job of explaining the lifecycle of their design, adapted slightly so it fits variant 5.

Users post a new blob of data onto the L1 chain. As soon as the blob is finalized on L1, it is logically final. Immediately after the L1 block is finalized, full nodes of the rollup scan through it and process all relevant data blobs in the order that they appear, generating a new rollup state root. At this point, the block is subjectively finalized from the perspective of all full nodes.

Our header producer in this design is the decentralized prover market.

Prover nodes (full nodes running inside a ZKVM) perform roughly the same process as full nodes — scanning through the DA block and processing all of the batches in order — producing proofs and posting them on chain. (Proofs need to be posted on chain if the rollup wants to incentivize provers — otherwise, it’s impossible to tell which prover was first to process a given batch). Once a proof for a given batch has been posted on chain, the batch is subjectively final to all nodes, including light nodes.
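
A light node's finality check in this variant might look roughly like the following; `verify` stands in for a real proof-system verifier, and all structures are illustrative assumptions rather than Sovereign Labs' actual types.

```python
# Illustrative: a batch is subjectively final for a light node once a proof
# over the DA-ordered batch commitment verifies.
from dataclasses import dataclass
from typing import Callable


@dataclass
class BatchProof:
    batch_commitment: bytes  # commitment to the batch as posted on the DA-Layer
    prev_state_root: bytes
    new_state_root: bytes
    proof: bytes             # ZK proof produced by a prover-market node


def light_node_finalize(p: BatchProof,
                        da_commitment: bytes,
                        verify: Callable[[BatchProof], bool]) -> bool:
    # The proof must cover exactly the batch the DA-Layer ordered, and must verify.
    return p.batch_commitment == da_commitment and verify(p)
```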

(Since many concepts are involved, you may want to review the schematic diagram again.)

Variant 5 enjoys the same censorship resistance as the DA-Layer. The decentralized prover market cannot censor transactions because the DA-Layer determines the canonical ordering. We decentralized the header producer only for better liveness and to create an incentives market. The liveness here is L = L_da && L_pm (prover market). If the incentives of the prover market are misaligned, or it has a liveness failure, light nodes won't be able to follow the chain, but rollup full nodes could still follow the chain if desirable, falling back to a based pessimistic rollup. The smallest trust-minimized setup here, just as in the optimistic case, is a DA-Layer light node + a rollup light node.

Variant 6: Based Hybrid Rollup with a Centralized Optimistic Header Producer and Decentralized Prover

We still let the DA-Layer nodes act as the Rollup's aggregator, delegating the inclusion and ordering of transactions to them.

As you can see from the figure below, both ZK Rollup and Optimistic Rollup use the same ordered transaction batches on the DA-Layer as the source of the Rollup ledger. This is why we can use two proof systems at the same time: the ordered transaction batches on the DA-Layer are not affected by the proof system itself.

Let’s talk about finality. From the perspective of a rollup full node, the rollup is final when the DA-Layer is final, as it just needs to execute the transactions for this variant. But we care more about the finality of the light node. Let’s assume the centralized header producer puts up some stake, signs over a header, and posts the calculated state roots to the DA-Layer.

As in variant 4, light nodes optimistically trust the execution and wait for a fraud proof from an honest full node showing that the header producer committed fraud. After the fraud proof window is over, the rollup block is final from the perspective of a rollup light node.

The key point is that if we can obtain a ZK proof, we no longer have to wait for the fraud proof window to end. Alongside single-round fraud proofs, we can substitute ZK proofs for fraud proofs and dismiss any malicious header produced by the optimistic header producer!

When the light node receives the ZK proof corresponding to a given Rollup transaction batch, that batch is finalized.

Now we have a fast soft commitment and a fast finality.
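
The resulting finality rule for a light node can be summarized in the sketch below; the window length and names are placeholders introduced only for this illustration.

```python
# Illustrative hybrid finality: a ZK proof finalizes immediately, otherwise the
# block finalizes once the fraud proof window passes without a fraud proof.
from typing import Optional

FRAUD_PROOF_WINDOW = 60 * 60   # placeholder window length in seconds


def hybrid_final(now: float,
                 header_seen_at: float,
                 zk_proof_verified: bool,
                 fraud_proof: Optional[bytes]) -> bool:
    if fraud_proof is not None:
        return False                    # the optimistic header was proven fraudulent
    if zk_proof_verified:
        return True                     # fast path: no need to wait for the window
    return now - header_seen_at >= FRAUD_PROOF_WINDOW
```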

Variant 6 still enjoys the same censorship resistance as the DA-Layer because it is based. For liveness, we have L = L_da && (L_op || L_pm), which means we have increased our liveness guarantee: if either the centralized header producer or the prover market has a liveness failure, we can fall back to the other scheme.

The smallest trust-minimized setup is a DA-Layer light node + a rollup light node.

Summary:

  1. We split the sequencer into two logical entities: the aggregator and the header producer

  2. We split the sequencing process into three logical steps: inclusion, ordering, and execution.

  3. Pessimistic rollups and based rollups are a thing

  4. Depending on your needs, you can plug-and-play aggregators and header producers.

  5. Each Rollup variant in this article followed this design pattern.

Finally, I have some thoughts. Please think about:

  • How do classic rollups fit into this? (referring to Ethereum Rollup)
  • In all the variants, we only made the aggregator do inclusion + ordering and the header producer execution. What if the aggregator only does inclusion and the header producer does ordering and execution? Think on-chain auctions. Could we separate all three?
  • What is a shared header producer / header producer market?
  • Who gets the MEV? And can we get it back?

Disclaimer:

  1. This article is reprinted from [Geek Web3] under the original title “Celestia researcher analyzes 6 Rollup variants: Sequencer = aggregator + Header generator”. All copyrights belong to the original author [NashQ, Celestia Researcher]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.