Experimental: The Krakatoa mempool is an upcoming feature and is subject to change.
## Overview
CometBFT’s Krakatoa specification introduces a new app mempool type that delegates transaction storage, validation, and rechecking to the application via two new ABCI methods: `InsertTx` and `ReapTxs`. This page documents Cosmos EVM’s implementation of that specification.
The `ExperimentalEVMMempool` is the application-side mempool that fulfills the Krakatoa contract. It manages two sub-pools — an EVM `LegacyPool` and a Cosmos `RecheckMempool` — handling transaction insertion, nonce-gap queuing, application-side rechecking, and reaping for peer broadcast, all without relying on CometBFT for mempool state.
For details on the CometBFT-side changes — the `AppMempool`, `AppReactor`, new ABCI methods, their guarantees, and how to enable your application to take advantage of Krakatoa — see the CometBFT Krakatoa documentation.

## Key Features
- **Krakatoa ABCI integration**: Implements `InsertTx` and `ReapTxs` handlers, enabling all the advantages of the Krakatoa specification — no ABCI lock contention, application-owned mempool state, and decoupled transaction gossip
- **Insert queue**: Fully async insertion via background queues, allowing `InsertTx` to return immediately without blocking on pool validation
- **Reap list**: A staging area that tracks validated transactions and manages which are returned to CometBFT via `ReapTxs` for peer broadcast
- **Application-side rechecking**: Transaction revalidation runs within the application after each block, replacing CometBFT’s `CheckTx`-driven recheck pass and giving the application full control over recheck timing and cancellation
- **Non-blocking transaction collection**: A height-aware transaction store allows block builders to collect rechecked transactions as they become available, without waiting for a full recheck pass to complete — enabling proposals to be built from partial recheck results when block time is tight
## Krakatoa ABCI Integration
The `ExperimentalEVMMempool` registers `InsertTx` and `ReapTxs` handlers on `BaseApp`. Together, these two handlers form the core loop: CometBFT pushes transactions in via `InsertTx` and pulls validated transactions back out via `ReapTxs` for peer broadcast.
### `InsertTx` and the Insert Queue
Instead of validating transactions synchronously inside the `InsertTx` ABCI call, the mempool pushes them onto insert queues — one for EVM, one for Cosmos. A background goroutine drains each queue, batches the pending items, and inserts them into the underlying pool.
Each queued item carries a subscription channel that delivers the insertion result. This supports two calling patterns:
- **Local RPC (`Insert`)**: Blocks on the channel, so the caller gets the real result (accepted, queued with nonce gap, or rejected)
- **P2P / ABCI (`InsertAsync`)**: Returns immediately — gossip-sourced transactions don’t need to wait
#### ABCI handler wiring
The ABCI handler wires this up in `evmd`:
### `ReapTxs` and the Reap List
The reap list tracks transactions that are ready for broadcast. It replaces the old `BroadcastTxFn` callback pattern — instead of the application pushing transactions to CometBFT, CometBFT pulls them via `ReapTxs`.
#### Adding transactions
The point at which a transaction enters the reap list depends on its type:

| Transaction type | Added to reap list | Why |
|---|---|---|
| EVM | On promotion to pending (`OnTxPromoted`) | Nonce-gapped transactions stay local until their dependencies are satisfied |
| Cosmos | Immediately after pool insertion | No queued/pending distinction — ready for broadcast once validated |
#### Removing transactions
Transactions leave the reap list in two ways:

| Path | Trigger | Behavior |
|---|---|---|
| Reap (normal) | `AppReactor` calls `ReapTxs` | Drained in FIFO order. Hash stays in the index as a guard to prevent double-broadcasting. |
| Drop (invalidation) | `OnTxRemoved` (EVM) or `removeCosmosTx` (Cosmos) | Fully removed from slice and index — if the tx is re-gossiped later, it can re-enter the reap list. |
#### ABCI handler wiring
The `ReapTxsHandler` drains the reap list and returns the encoded bytes:
## Application-Side Rechecking
With Krakatoa, the application rechecks its own mempool after each block — CometBFT no longer drives this via `CheckTx`. Both sub-pools rely on a `Rechecker` that wraps an `sdk.Context` and the ante handler, threading state forward as each transaction is validated.
### EVM Rechecking
On a new block, the `LegacyPool` runs a two-phase recheck:

1. `promoteExecutables` — promotes queued transactions that are now executable. During a block reset, promotion does not write state back, since the promoted transactions will be re-validated next.
2. `demoteUnexecutables` — validates all pending transactions on a fresh context, writing state back to the recheck context after each success.

Outside a block reset, `promoteExecutables` does write state back to the recheck context — it builds on the context `demoteUnexecutables` has already established.
### Cosmos Rechecking
Transactions are rechecked in `Select` order, committing state forward. Failed transactions — and dependent transactions from the same account — are removed.
New inserts must validate against post-recheck state, so the pool holds an exclusive lock for the full recheck pass.
### Cancellation
If a new block arrives mid-recheck, the current pass is cancelled and restarted against the new state. Both sub-pools check a cancellation channel between transactions — no partial results are committed.

## Non-Blocking Transaction Collection
Rechecking holds the pool lock, but since Krakatoa removes the ABCI lock, `PrepareProposal` can arrive mid-recheck. We can’t block the proposer waiting for a full recheck to finish.
Both pools solve this with a `HeightSync` store — as each transaction passes recheck, it’s pushed into a lock-free, height-indexed store. The proposer reads from this store directly:
- Recheck complete → proposer gets the full set immediately
- Recheck in progress → proposer waits up to a configurable timeout, then takes whatever’s been validated so far
## Address Reservation
Previously, an account could have transactions in both the EVM and Cosmos pools simultaneously. With Krakatoa, this is no longer allowed — a single account can only have transactions in one pool at a time. Each pool gets its own `ReservationHandle` with a unique ID. The handle’s `Hold` method is idempotent within the same pool (holding an already-held address is a no-op) but fails across pools.
### Why
Each pool needs to recheck, insert, and invalidate transactions independently — without consulting the other pool’s state. By guaranteeing an address only exists in one pool, each pool can validate and evict transactions in isolation, with no cross-pool coordination required.

## Transaction Lifecycle Diagrams
The following diagrams trace a nonce-gapped EVM transaction through its full lifecycle — from submission through rechecking across multiple blocks.

**Scenario**: An account sends a tx with nonce 5, but the pool expects nonce 4. The tx is queued until nonce 4 arrives in a later block.

### Local RPC Submission (eth_sendRawTransaction)
### Local RPC Submission — Cosmos tx (BroadcastTxSync)
## Related Documentation
- CometBFT Krakatoa - AppMempool, ABCI methods, and CometBFT-side architecture
- Mempool - Current mempool architecture and configuration
- Transactions - Transaction types and lifecycle
- Gas and Fees - Fee calculation and EIP-1559 integration