reth, the Rust-based execution client, is architected from the ground up with a singular, driving mandate: to be the most performant, modular, and state-of-the-art execution client in the EVM ecosystem. This pursuit of performance informs every component, from its parallelized sync stages to its database-agnostic state layer. Within this complex system, however, all activity eventually converges on a single, critical component: the Ethereum Virtual Machine (EVM).
The heart of reth's execution layer is revm, a highly-optimized, next-generation EVM implementation written in Rust. revm is, by many metrics, one of the fastest EVM interpreters in existence. It has achieved this through meticulous optimization of its core dispatch loop, efficient state handling, and Rust's low-level performance guarantees.
However, revm is approaching the asymptotic limit of what an interpreter-based model can achieve. This limitation is known as the "Interpreter's Performance Ceiling." An interpreter, by its very nature, operates as a "one-opcode-at-a-time" processor. Its core logic is fundamentally a large while loop containing a match (or switch) statement that dispatches to a corresponding Rust function for each EVM opcode (e.g., ADD, SLOAD, JUMP).
This architecture incurs two significant, non-reducible overheads: dispatch overhead, because every single opcode pays for the bounds check and the branch in the match statement (and the branch mispredictions it invites); and lost optimization opportunities, because each opcode handler is compiled in isolation, so values shuttle through an emulated stack in memory rather than staying in CPU registers, and no optimization can span opcode boundaries.
revm is brilliant at minimizing this overhead, but it cannot eliminate it. The interpreter's fundamental "fetch-decode-execute" cycle, running on top of the host CPU's own fetch-decode-execute cycle, establishes a hard performance ceiling that no further optimization can break through.
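To make the ceiling concrete, here is a minimal sketch of an interpreter's "one-opcode-at-a-time" dispatch loop. The opcode names mirror the EVM, but this is a toy illustration, not revm's actual implementation.

```rust
// A toy stack-machine interpreter: a loop around a match statement.
// Every opcode pays for the fetch (indexing), the decode (match
// dispatch), and the execute (stack traffic in heap memory).
#[derive(Clone, Copy)]
enum Op {
    Push(u64), // simplified: the real EVM has PUSH1..PUSH32 immediates
    Add,
    Mul,
    Stop,
}

fn interpret(code: &[Op]) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    let mut pc = 0;
    loop {
        match code[pc] {
            Op::Push(v) => stack.push(v),
            Op::Add => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_add(b));
            }
            Op::Mul => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_mul(b));
            }
            Op::Stop => return stack.pop(),
        }
        pc += 1;
    }
}
```

Running `interpret(&[Push(2), Push(3), Add, Push(4), Mul, Stop])` executes six dispatch iterations to compute `(2 + 3) * 4`; a compiler would collapse the same program to a handful of native instructions, or fold it to a constant outright.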
The revmc project is the solution to this performance ceiling. It represents a fundamental paradigm shift for revm, moving it from a highly-optimized interpreter to a high-performance virtual machine runtime that utilizes compilation technology. This is not a mere optimization of the existing revm code; it is a replacement of its core execution engine.
revmc aims to accelerate EVM execution by transforming EVM bytecode—the language of smart contracts—directly into native machine code (e.g., x86_64, ARM64) that can be executed by the host CPU. This process, managed via Just-in-Time (JIT) or Ahead-of-Time (AOT) strategies, completely eliminates the interpreter's dispatch loop. For example, instead of emulating an ADD opcode, revmc generates the single native add instruction that the host CPU understands.
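The difference is easiest to see side by side. The sketch below is purely illustrative: revmc emits machine code through a compiler backend, not Rust source, but the compiled result of a `PUSH a, PUSH b, ADD` sequence is morally equivalent to this one-line function.

```rust
// Illustrative only: the native-code equivalent of the EVM sequence
// PUSH a, PUSH b, ADD. No dispatch loop, no emulated stack in memory;
// the operands live in CPU registers and the body lowers to a single
// native `add` instruction on x86_64/ARM64.
fn compiled_add(a: u64, b: u64) -> u64 {
    a.wrapping_add(b)
}
```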

At BNB Chain, we aim to make a major strategic pivot for revm and reth. The next frontier for onchain performance lies in compilation. This is a significantly more complex engineering challenge, but it is one that offers non-linear performance gains, moving EVM execution from "emulated" speed to "native" speed.
The revmc architecture is a sophisticated system composed of three primary components: the bridge for state communication, a dual JIT/AOT compiler engine, and the LLVM compiler backend. This section clarifies how these components interact and where performance benefits or overheads arise.
The single greatest challenge in a JIT compiler for a stateful VM is managing the boundary between the "unsafe," high-performance compiled native code and the "safe," structured host runtime. revmc addresses this with a high-level bridge that connects the compiled EVM code to the revm host's state-management systems (such as EVMData and the Journal).
The execution flow across this bridge is precise: the compiled contract runs natively until it reaches a stateful opcode (such as SLOAD or SSTORE); at that point it calls the corresponding host function exposed by the bridge; the host performs the journaled state access and returns the result; and native execution resumes where it left off.

This architecture reveals a critical performance characteristic: the bridge acts as a context switch, and every crossing has a non-zero overhead. This creates a predictable performance model: the speedup provided by revmc will be inversely proportional to the number of stateful opcodes in a transaction.
Crucially, it is also the security boundary. The JIT-compiled code is, by definition, generated from untrusted user input (EVM bytecode). The bridge ensures this native code is sandboxed. It cannot directly access host memory, read the reth node's RAM, or interfere with the state of other transactions. It can only call the specific, safe, and heavily-validated functions exposed by the ABI.
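The shape of this boundary can be sketched as follows. The names (`HostCtx`, `host_sload`, `host_sstore`) are hypothetical stand-ins, not revmc's actual ABI, and a simple `HashMap` stands in for revm's Journal.

```rust
use std::collections::HashMap;

// Host-side state the compiled code can never touch directly.
struct HostCtx {
    storage: HashMap<u64, u64>, // stands in for revm's journaled state
    bridge_crossings: u64,      // every call below is one context switch
}

// In a real JIT these would be `extern "C"` entry points handed to the
// generated machine code, which holds only an opaque context pointer.
fn host_sload(ctx: &mut HostCtx, key: u64) -> u64 {
    ctx.bridge_crossings += 1;
    *ctx.storage.get(&key).unwrap_or(&0)
}

fn host_sstore(ctx: &mut HostCtx, key: u64, value: u64) {
    ctx.bridge_crossings += 1;
    ctx.storage.insert(key, value);
}

// A "compiled" contract body: the arithmetic in the middle runs at
// native speed with zero crossings; only the stateful opcodes pay
// the bridge's context-switch cost.
fn compiled_contract(ctx: &mut HostCtx) -> u64 {
    let x = host_sload(ctx, 0); // bridge crossing #1
    let y = (x + 1) * 3;        // pure computation, no crossing
    host_sstore(ctx, 0, y);     // bridge crossing #2
    y
}
```

However much computation happens between the two stateful opcodes, the crossing count stays at two, which is exactly why the speedup scales inversely with the number of stateful opcodes in a transaction.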
revmc is designed as a sophisticated runtime, not just a simple compiler. It employs a dual-mode strategy to balance the trade-offs between performance and latency.
Just-in-Time (JIT) compilation is the default mode for handling "hot" or newly-seen contracts. The JIT flow is dynamic: the contract's bytecode is compiled to native code at runtime, and the resulting function is cached for subsequent calls.
The JIT model's primary flaw is the "warm-up problem." The first execution of a contract is slower than under the interpreter, because it must pay a one-time "JIT tax" (T_jit) for compilation.
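A back-of-the-envelope model makes the trade-off precise: compilation pays off once n executions at native speed beat n executions under the interpreter, i.e. once n · T_interp > T_jit + n · T_native. The numbers in the usage example are illustrative assumptions, not measurements.

```rust
// Break-even execution count for the JIT tax: solving
// n * t_interp > t_jit + n * t_native for n gives
// n > t_jit / (t_interp - t_native).
fn break_even_executions(t_jit: f64, t_interp: f64, t_native: f64) -> f64 {
    // If compiled code is not faster, JIT never pays off.
    assert!(t_interp > t_native);
    t_jit / (t_interp - t_native)
}
```

With hypothetical figures of T_jit = 50 ms, T_interp = 1.0 ms, and T_native = 0.2 ms per call, the break-even point is 50 / 0.8 = 62.5 executions: a contract called fewer than ~63 times never recoups its compilation cost, which is exactly the gap AOT compilation closes.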
AOT compilation is revmc's solution to the warm-up problem. In this mode, compilation does not happen at runtime.
Instead, reth can be shipped with a pre-compiled cache of the most popular contracts on mainnet (e.g., the USDC proxy, the WETH contract, the Pancake V3 Router).
This strategy is a hallmark of sophisticated VM design.
The AOT strategy effectively amortizes the compilation cost across all reth users, providing the raw speed of native execution (T_native) with the low-latency, "zero-warm-up" benefit of an interpreter.
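The dual-mode runtime described above amounts to a cache-lookup dispatch, sketched below. The names (`Engine`, `CodeHash`, `NativeFn`) and the eager-compile fallback are illustrative simplifications; a real runtime would interpret on a miss and JIT-compile in the background.

```rust
use std::collections::HashMap;

type CodeHash = [u8; 32];
type NativeFn = fn(u64) -> u64;

struct Engine {
    aot_cache: HashMap<CodeHash, NativeFn>, // shipped pre-compiled
    jit_cache: HashMap<CodeHash, NativeFn>, // filled at runtime
}

impl Engine {
    fn execute(&mut self, hash: CodeHash, input: u64) -> u64 {
        // 1. AOT hit: native speed with zero warm-up.
        if let Some(f) = self.aot_cache.get(&hash) {
            return f(input);
        }
        // 2. JIT hit: native speed, T_jit already paid.
        if let Some(f) = self.jit_cache.get(&hash) {
            return f(input);
        }
        // 3. Miss: compile and cache (simplified to a placeholder body;
        //    a real engine would interpret now and compile in the
        //    background).
        fn stub(input: u64) -> u64 {
            input.wrapping_mul(2)
        }
        self.jit_cache.insert(hash, stub);
        stub(input)
    }
}
```

Keying the caches by code hash is what makes the AOT model shippable: the pre-compiled binaries for popular contracts are valid on every node because identical bytecode hashes to identical keys.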
The primary benchmarks are drawn from a test suite that provides a diverse mix of workloads. These workloads are designed to stress different aspects of the EVM, from pure computation (e.g., fibonacci) and memory operations (e.g., mload_mstore) to state-heavy operations (e.g., sstore_write). All comparisons are run on identical hardware, with revmc's performance measured both with and without its one-time JIT compilation cost.
The key metrics are wall-clock execution time with the one-time JIT compilation cost included, execution time with it excluded, and the resulting speedup relative to the revm interpreter baseline.
The headline finding is that revmc is, in a wide variety of use cases, significantly faster than the interpreter, with some computation-heavy benchmarks demonstrating speedups of up to 6.9x.
However, the aggregate numbers hide a more nuanced and important story. The performance gains are not uniform; they are highly dependent on the type of work being performed, which validates the architectural model described in Section 2.
The following table provides a representative sample of these benchmark results:
Table 1: revmc vs. revm Interpreter: Core Execution Benchmarks
Repo: https://github.com/bnb-chain/revmc/tree/experimental/examples
Why results look like this
The revmc project represents the logical evolution of the revm execution environment for BNB Chain. The analysis confirms the following:
Once revmc is stable, reth will become the leading execution client for computationally-demanding tasks. For years, the EVM gas model has trained Solidity developers to think in a specific way: "computation is expensive, storage is (relatively) cheap." Developers are taught to avoid for loops and complex on-the-fly math. They are encouraged to use SSTORE to cache intermediate results to save gas on future calls.
revmc flips this entire model on its head.
In the revmc world, the wall-clock time cost is the inverse: pure computation is nearly free, while the stateful storage operations that must cross the bridge dominate execution time.
This will usher in an era of "JIT-aware" smart contracts. We will see the rise of new protocols that perform complex, in-memory calculations, data transformations, or cryptographic verifications on-the-fly, avoiding SSTORE operations that are now (in wall-clock time) the most expensive thing a contract can do. This unlocks entirely new onchain use cases that were previously "too expensive" from a computational standpoint.
The Endgame: AOT-as-a-Service
The JIT/AOT dual strategy is the path forward. The logical endgame for this technology is an "AOT-as-a-Service" maintained by the BNB reth team. This service would monitor BNB mainnet, automatically AOT-compile the top 10,000 most-used contracts for all supported hard forks, and distribute these signed, verified native binaries as part of the reth client. This would eliminate JIT overhead for 99%+ of all transactions, finally achieving the dream: the raw performance of native code with the zero-latency, zero-warm-up behavior of an interpreter.