Minimmit
Thanks to Patrick O'Grady for feedback and review.
My first attempt at implementing a consensus engine end-to-end was Minimmit1. Compared to the protocols I've struggled to parse over the years, it's refreshingly simple (with love, I'm looking at you, Gasper2). This post is my attempt at an intuitive explanation of how it works, after spending the last month putting together an implementation at commonware.
What is Consensus, Anyways?
Say you have a group of machines that need to agree on a sequence of events. If every machine were honest and the network were perfectly reliable, this would be trivial: pick a leader, have it broadcast decisions, done. But in real systems, machines crash, networks partition, and some participants might actively try to sabotage things.
Byzantine Fault Tolerant (BFT) consensus is the art of getting a group of n participants to agree on the same value, even when up to f of them are Byzantine (they can behave arbitrarily). Crash, lie, send conflicting messages to different peers, go silent at the worst possible moment. Anything goes.
The two properties you need from a BFT consensus protocol:
- Safety: Honest participants never disagree. If one honest node decides the next block is B, no other honest node will ever decide it's something else. This has to hold always, regardless of network conditions or adversary behavior.
- Liveness: The system eventually makes progress. You can't achieve much by having everyone sit still and do nothing. Blocks need to keep getting finalized, at least when the network is behaving reasonably.
Three-Round vs. Two-Round BFT
The standard BFT threshold is n = 3f + 1. The key idea is quorum intersection: any two quorums must share at least one honest node. That honest overlap preserves safety, because honest nodes do not vote for conflicting blocks in the same round. If the overlap could be entirely Byzantine, the adversary could sign both sides and form conflicting certificates. With quorum size 2f + 1 in a system of n = 3f + 1, any two quorums intersect in at least (2f + 1) + (2f + 1) - (3f + 1) = f + 1 members. Since at most f are Byzantine, at least one overlapping member is honest. But this still requires two rounds of voting to finalize. Consider what a Byzantine leader can do: it can send different proposals to different nodes, splitting honest votes roughly in half. Now the protocol is in a tough spot:
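To make the intersection arithmetic concrete, here's a quick sanity check in Python (my own sketch, not from any Minimmit codebase):

```python
def quorum_overlap(n: int, quorum: int) -> int:
    """Minimum number of members shared by any two quorums of the given size."""
    return max(0, 2 * quorum - n)

# Classic BFT setup: n = 3f + 1 nodes with quorums of 2f + 1.
for f in range(1, 10):
    n = 3 * f + 1
    overlap = quorum_overlap(n, 2 * f + 1)
    # The overlap is f + 1: even if f of the shared members are Byzantine,
    # at least one honest node sits in both quorums.
    assert overlap == f + 1
```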
- If you require 2f + 1 votes to finalize, a split view never finalizes and the Byzantine leader stalls the chain.
- If you try to continue with whichever proposal has the most votes, the Byzantine nodes can lend their votes strategically, temporarily boosting one proposal while retaining the ability to finalize the other.
This is the idea behind the prepare/commit structure introduced in PBFT3. The first voting round "locks" a value (prepare), which prevents honest nodes from voting for incompatible blocks in later views, and the second round confirms it (commit). Three communication rounds (propose + two votes) is provably optimal for PBFT-style Byzantine broadcast4 at the n = 3f + 1 threshold.
Minimmit drops to a single voting round by using two different quorum thresholds on the same vote. A block that receives 2f + 1 votes achieves an M-notarization (mini notarization), which lets the view advance quickly. A block that receives 4f + 1 votes achieves an L-notarization (large notarization), which is equivalent to finalization. The problem is that these two thresholds need enough room between them to prevent contradictions. If a block is finalized (L-notarized), a conflicting block must not be able to reach even M-notarization, and the view must not be nullifiable. Making those guarantees work with just one round of votes requires extra headroom in the node count, so we follow the paper's simplifying assumption n = 5f + 1.5
The trade is that you need more nodes to tolerate the same number of Byzantine faults. In practice, though, large-scale systems already have hundreds or thousands of validators, so tolerating f Byzantine out of n = 5f + 1 (~20%) rather than n = 3f + 1 (~33%) is often acceptable, especially when the payoff is cutting a full communication round from every view (faster finalization + ~50% fewer vote messages!) Minimmit, and other two-round protocols like it, are optimized for speed 🏎️
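The parameters are easy to tabulate. A small helper (a standalone sketch under the paper's n = 5f + 1 assumption, with names I picked for illustration):

```python
def thresholds(f: int) -> tuple[int, int, int]:
    """Minimmit quorum sizes under the paper's n = 5f + 1 assumption."""
    n = 5 * f + 1
    m_quorum = 2 * f + 1   # M-notarization: enough to advance the view
    l_quorum = 4 * f + 1   # L-notarization: finalization
    return n, m_quorum, l_quorum

# Tolerating f = 10 Byzantine nodes requires n = 51 participants (~20%),
# versus n = 31 for a classic 3f + 1 protocol (~33%).
n, m, l = thresholds(10)
assert (n, m, l) == (51, 21, 41)
```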
Safety
The two safety invariants hold because of how the quorum sizes interact with n = 5f + 1.
X1: Uniqueness
If block b receives an L-notarization (4f + 1 notarize votes), can a conflicting block b' also reach M-notarization (2f + 1 notarize votes) in the same view? Count the overlap: (4f + 1) + (2f + 1) - (5f + 1) = f + 1.
Since at most f processors are Byzantine, at least one honest processor must be in both sets. But an honest processor only votes once per view! So, if one block can be finalized, no conflicting block can even reach the mini notarization threshold.
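The same count, spelled out numerically (my own sanity check of the argument, not protocol code):

```python
# X1: an L-quorum (4f + 1) and an M-quorum (2f + 1) drawn from n = 5f + 1
# nodes must share at least (4f + 1) + (2f + 1) - (5f + 1) = f + 1 members.
for f in range(1, 100):
    n = 5 * f + 1
    overlap = (4 * f + 1) + (2 * f + 1) - n
    # More than f shared members, so at least one is honest and voted twice:
    # a contradiction, since honest nodes vote once per view.
    assert overlap == f + 1
```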
X2: No Nullification
If block b gets an L-notarization, can the view still be nullified? Among the 4f + 1 voters, at least 3f + 1 are correct (subtracting the at most f Byzantine). These correct voters will never send a nullify message. They already voted for b, and they can't be triggered into evidence-nullifying since their honest peers also voted for b. That leaves at most 2f processors (the f non-voters plus at most f Byzantine voters) who could possibly nullify. But nullification requires 2f + 1 nullify votes, so there aren't enough. A finalized view is locked.
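Again as arithmetic (a quick check I wrote, not from the implementation):

```python
# X2: if 4f + 1 nodes voted to notarize b, at least 3f + 1 of them are honest
# and will never nullify. Count everyone left who could send a nullify vote.
for f in range(1, 100):
    n = 5 * f + 1
    honest_voters = (4 * f + 1) - f          # 3f + 1 honest notarize voters
    possible_nullifiers = n - honest_voters  # f non-voters + f Byzantine voters
    # 2f possible nullifiers fall one short of the 2f + 1 nullify quorum.
    assert possible_nullifiers == 2 * f
```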
Liveness
The above rules show us that honest participants can never disagree. But a protocol that never decides anything is trivially safe, and also completely useless. Consensus must also be able to make meaningful progress.
M-Notarization as a Signal
An M-notarization is not a finalization. It's a signal to the next proposer: "it's safe to build on this block." By X1, no other block in the same view can be finalized once one proposal has an M-notarization.
A view can have multiple M-notarizations if a Byzantine leader equivocates, sending different proposals to different nodes. None of them can reach finalization, because honest nodes only vote once.
View Advancement
Minimmit operates in the partial synchrony model: after some unknown point in time called GST (Global Stabilization Time), all messages between honest nodes arrive within a known bound Δ. Before GST, the network can be arbitrarily unreliable - messages can be delayed for a long (and unpredictable) time and reordered. Nodes don't know when GST occurs, but they know Δ and use it to set timeouts.
A node advances to the next view when it sees either an M-notarization (progress) or a nullification (2f + 1 nullify votes).
An honest node sends a nullify message in two cases:
- Timeout: It waited 2Δ after entering the view without voting. The budget accounts for up to Δ for the leader's proposal to arrive, plus Δ for any forwarded M-notarization or nullification from the previous view to arrive. If nothing shows up in that window, the leader is presumed faulty.
- Evidence-nullify: It voted for block b, then saw 2f + 1 messages that are either nullify votes or notarize votes for a different block. This proves b can't be finalized (it can never reach 4f + 1 votes), so nullifying is safe.
Evidence-nullify is what makes the protocol unstoppable. Even if a Byzantine leader splits votes, honest nodes discover the split and nullify their way out.
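A sketch of the evidence-nullify check (my own illustration; the function name and message representation are made up, not the commonware API):

```python
def can_evidence_nullify(f: int, my_block: str, observed: dict) -> bool:
    """Hypothetical sketch. `observed` maps peer id -> ('notarize', block) or
    ('nullify', None). After voting for my_block, a node may nullify once
    2f + 1 peers have provably not voted for it."""
    against = sum(
        1 for kind, block in observed.values()
        if kind == "nullify" or (kind == "notarize" and block != my_block)
    )
    # 2f + 1 votes "against" cap my_block's support: at least f + 1 of those
    # peers are honest and vote only once, so my_block can collect at most
    # (5f + 1) - (f + 1) = 4f votes, short of the 4f + 1 finalization quorum.
    return against >= 2 * f + 1
```

With f = 1, seeing two notarize votes for a different block plus one nullify vote (three messages total) is enough evidence to nullify.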
When a new leader takes over at view v, it looks back through recent views to find the latest M-notarized block. It proposes a child of that block, along with proof: the M-notarization for the parent and nullifications showing that every view in between produced no viable block.
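The parent-selection walk can be sketched like this (a simplified illustration under my own naming, not the real implementation):

```python
def pick_parent(current_view: int, m_notarizations: dict, nullifications: dict):
    """Hypothetical sketch: walk back from the previous view until a view with
    an M-notarized block is found, collecting nullifications along the way as
    proof that the skipped views produced nothing viable."""
    proofs = []
    for v in range(current_view - 1, -1, -1):
        if v in m_notarizations:
            return m_notarizations[v], proofs  # parent block + skip proofs
        proofs.append(nullifications[v])       # view v was nullified
    raise ValueError("genesis is always notarized, so this is unreachable")

# Leader of view 6 finds view 3's block M-notarized and views 4-5 nullified:
parent, proofs = pick_parent(6, {3: "B3"}, {4: "N4", 5: "N5"})
assert parent == "B3" and proofs == ["N5", "N4"]
```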
Why Honest+Online Leaders Succeed
If the leader is honest and the network is synchronous, its block will be finalized. All honest nodes receive the proposal within Δ and vote. One subtlety: some nodes may see the M-notarization before they've voted themselves. Minimmit handles this by requiring a node that sees an M-notarization for a block it hasn't voted on yet to vote before advancing. This ensures all nodes that have not yet voted still vote, so the block reaches the 4f + 1 threshold and gets an L-notarization.
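Note how tight this is: the honest nodes alone are exactly an L-quorum, with zero slack, which is why every honest vote matters. A quick check (my own, not from the paper):

```python
# Under n = 5f + 1, the number of honest nodes n - f is exactly the
# L-notarization threshold 4f + 1: finalizing without Byzantine help
# requires every honest node to vote.
for f in range(1, 100):
    n = 5 * f + 1
    assert n - f == 4 * f + 1
```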
[Figure: a chain of views showing competing M-notarizations (view 3), nullified views, and how honest proposers pick the highest M-notarized block as parent.]
Implementation
At commonware, I've been working on an implementation of this protocol for the past month, as well as integrating it into the surrounding infrastructure. While I'm still smoothing out the edges, it works!
In a testnet with 50 participants across 10 AWS regions (US West, US East, South America, EU West, EU North, EU Central, AP South, AP Northeast 1 & 2, AP Southeast 2), view latency averaged ~146ms and finalization latency ~269ms. The simulation below6 shows round-robin leader rotation across these regions, with blocks turning green once finalized. Each view produces a block, and finalization follows roughly one to two views later.
[Interactive simulation widget: per-view status grid.]
Footnotes
1. Chou, B. K., Lewis-Pye, A., & O'Grady, P. (2025). Minimmit: Fast Finality with Even Faster Blocks. ↩
2. Buterin, V., Hernandez, D., Kamphefner, T., Pham, K., Qiao, Z., Ryan, D., Sin, J., Wang, Y., & Zhang, Y. X. (2020). Combining GHOST and Casper. ↩
3. Castro, M., & Liskov, B. (1999). Practical Byzantine Fault Tolerance. ↩ ↩2
4. Abraham, I., Nayak, K., Ren, L., & Xiang, Z. (2021). Good-case Latency of Byzantine Broadcast: A Complete Categorization. ↩
5. In general, the tight 2-round finality bound is n ≥ 5f - 1 (Kuznetsov, P., Tonkikh, A., & Zhang, Y. X. (2021). Revisiting Optimal Resilience of Fast Byzantine Consensus. PODC). The Minimmit paper adopts n = 5f + 1 for simplicity in its formal setup and analysis. ↩
6. Simulation UI inspired by https://x.com/0xBaltar/status/2023117736071573725 ↩