BSC processes billions of transactions, and many of them look alike. Around 60% of mainnet transactions are repetitive DeFi interactions, such as swaps on PancakeSwap. These patterns cause the EVM to re-run identical instruction fragments across the network, over and over.
This repetition leads to wasted cycles: the interpreter dispatches the same sequences repeatedly, shuffling the stack in the same way each time.
Super-Instructions are BSC’s answer. By replacing hot bytecode sequences with a single custom opcode, execution becomes more efficient. Instead of executing ten steps, the interpreter executes one atomic operation.
For developers, auditors, and node operators, the benefits are concrete: faster execution, quicker sync, and no changes to deployed contracts.
Importantly, super-instructions are consensus neutral. On-chain bytecode remains unchanged; optimizations happen only during local execution.
BSC identifies high-frequency bytecode sequences from mainnet traffic, scoring them by (length × frequency) and resolving overlaps using a greedy graph-based selection.
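The scoring and selection step can be sketched as follows. The frequencies, the candidate names, and the containment-based overlap check are all illustrative stand-ins for the graph-based resolution the real pipeline uses:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// candidate is a bytecode sequence observed in traces, with its
// occurrence count (illustrative values, not mainnet data).
type candidate struct {
	seq  []string
	freq int
}

// score is the (length × frequency) metric described above.
func (c candidate) score() int { return len(c.seq) * c.freq }

// selectTopK greedily picks the highest-scoring candidates, skipping
// any candidate already covered by a selected one. The substring
// check is a simplified stand-in for the graph-based overlap
// resolution.
func selectTopK(cands []candidate, k int) []candidate {
	sort.Slice(cands, func(i, j int) bool { return cands[i].score() > cands[j].score() })
	var picked []candidate
	for _, c := range cands {
		covered := false
		for _, p := range picked {
			if strings.Contains(strings.Join(p.seq, " "), strings.Join(c.seq, " ")) {
				covered = true
				break
			}
		}
		if !covered {
			picked = append(picked, c)
		}
		if len(picked) == k {
			break
		}
	}
	return picked
}

func main() {
	cands := []candidate{
		{[]string{"AND", "DUP2", "ADD"}, 900},                         // score 2700
		{[]string{"AND", "DUP2", "ADD", "SWAP1", "DUP2", "LT"}, 500},  // score 3000
		{[]string{"PUSH1", "MLOAD"}, 400},                             // score 800
	}
	for _, c := range selectTopK(cands, 2) {
		fmt.Println(strings.Join(c.seq, " "), c.score())
	}
}
```

The longer sequence wins on score, and the shorter one is dropped because every one of its occurrences inside the winner is already covered.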
For builders, this means contracts run faster on BSC without any changes to source code or deployment. For node operators, blocks sync more quickly and use fewer resources.
Super-instructions are not new in computer science, but BSC’s approach combines two innovations: the instruction set is mined from real mainnet traffic rather than hand-picked, and fusion is applied in place with length-preserving padding so on-chain bytecode and jump offsets are untouched.
Together, these improvements mean optimizations are practical, safe, and beneficial at BSC scale.
BSC implements them through two main components: an offline mining pipeline that discovers hot patterns in mainnet traffic, and an in-client fusion engine that applies them during local execution.
The optimization engine can be switched on or off at runtime. When off, the node behaves like unmodified Geth.
A control-flow graph (CFG) breaks contract bytecode into basic blocks, excluding dead code and metadata that could otherwise interfere with the optimization and compromise correctness.
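A minimal sketch of the block-splitting idea, assuming standard EVM opcode values. The real analysis does more (dead-code and metadata filtering), but the PUSH-immediate skipping shown here is the key property that keeps data bytes from being misread as code:

```go
package main

import "fmt"

// Standard EVM opcode values relevant to block boundaries.
const (
	opSTOP     = 0x00
	opJUMP     = 0x56
	opJUMPI    = 0x57
	opJUMPDEST = 0x5b
	opPUSH1    = 0x60 // PUSH1..PUSH32 occupy 0x60..0x7f
)

// basicBlockStarts returns the offsets at which basic blocks begin:
// offset 0, every JUMPDEST, and the instruction after any terminator
// (JUMP/JUMPI/STOP). PUSH immediates are skipped so their data bytes
// are never interpreted as opcodes.
func basicBlockStarts(code []byte) []int {
	starts := []int{0}
	add := func(off int) {
		if starts[len(starts)-1] != off { // avoid consecutive duplicates
			starts = append(starts, off)
		}
	}
	for pc := 0; pc < len(code); pc++ {
		op := code[pc]
		switch {
		case op >= opPUSH1 && op <= 0x7f:
			pc += int(op-opPUSH1) + 1 // skip the immediate bytes
		case op == opJUMPDEST:
			add(pc)
		case op == opJUMP || op == opJUMPI || op == opSTOP:
			if pc+1 < len(code) {
				add(pc + 1)
			}
		}
	}
	return starts
}

func main() {
	// PUSH1 0x04, JUMP, JUMPDEST, STOP — two basic blocks.
	code := []byte{opPUSH1, 0x04, opJUMP, opJUMPDEST, opSTOP}
	fmt.Println(basicBlockStarts(code)) // [0 3]
}
```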
Optimized bytecode matches the original in length and offsets, guaranteeing valid control flow. Results are cached for future use, so popular contracts run faster with negligible overhead. Auxiliary artefacts such as jump analysis bit-vectors are cached separately for reuse.
Optimized contracts are generated once, then reused across future executions, making performance improvements cumulative over time.
Before execution, the EVM checks the cache:
If an optimized version exists, the contract is marked optimized before execution.
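A simplified sketch of that lookup. `optimizedCache` and `codeForExecution` are hypothetical names; the real engine keys off Geth's code hash and caches jump-analysis bit-vectors alongside the fused code:

```go
package main

import "fmt"

// optimizedCache maps a code hash to its fused bytecode. This is a
// hypothetical simplification of the cache integrated into the client.
var optimizedCache = map[string][]byte{}

// codeForExecution returns fused bytecode for a contract, generating
// and caching it on first sight so later executions pay no cost.
func codeForExecution(codeHash string, original []byte, optimize func([]byte) []byte) []byte {
	if fused, ok := optimizedCache[codeHash]; ok {
		return fused // cache hit: contract is already marked optimized
	}
	fused := optimize(original)
	optimizedCache[codeHash] = fused // generated once, reused afterwards
	return fused
}

func main() {
	opt := func(b []byte) []byte { return b } // identity stand-in for the fusion pass
	codeForExecution("0xabc", []byte{0x60, 0x01}, opt)
	_, cached := optimizedCache["0xabc"]
	fmt.Println(cached) // true: second execution skips the fusion pass
}
```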
The interpreter runs normally, but when it encounters a super-instruction byte, it calls a single handler instead of multiple standard ones.
For example, a super-instruction may represent a full sequence such as: AND DUP2 ADD SWAP1 DUP2 LT.
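To see why fusion preserves semantics, the sketch below applies the net stack effect of that six-opcode sequence in one step and checks it against an opcode-by-opcode reference. `uint64` stands in for the EVM's 256-bit words; the handler names are illustrative:

```go
package main

import "fmt"

// stack is a minimal EVM-style stack with the top at the end.
type stack []uint64

func (s *stack) push(v uint64) { *s = append(*s, v) }
func (s *stack) pop() uint64 {
	v := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return v
}

// fusedAndDup2AddSwap1Dup2Lt applies the net effect of
// AND DUP2 ADD SWAP1 DUP2 LT in one step: pop a, b, c and push
// (a&b)+c followed by the overflow flag ((a&b)+c < c). Addition
// wraps on overflow, matching EVM semantics at this width.
func fusedAndDup2AddSwap1Dup2Lt(s *stack) {
	a, b, c := s.pop(), s.pop(), s.pop()
	sum := (a & b) + c
	var lt uint64
	if sum < c { // the overflow check this pattern typically compiles to
		lt = 1
	}
	s.push(sum)
	s.push(lt)
}

// reference executes the six opcodes one by one for comparison.
func reference(s *stack) {
	a, b := s.pop(), s.pop()
	s.push(a & b)           // AND
	s.push((*s)[len(*s)-2]) // DUP2
	x, y := s.pop(), s.pop()
	s.push(x + y) // ADD
	n := len(*s)
	(*s)[n-1], (*s)[n-2] = (*s)[n-2], (*s)[n-1] // SWAP1
	s.push((*s)[len(*s)-2])                     // DUP2
	x, y = s.pop(), s.pop()
	if x < y { // LT
		s.push(1)
	} else {
		s.push(0)
	}
}

func main() {
	s1 := stack{7, 12, 10} // c=7, b=12, a=10 (top of stack last)
	s2 := stack{7, 12, 10}
	fusedAndDup2AddSwap1Dup2Lt(&s1)
	reference(&s2)
	fmt.Println(s1, s2) // identical final stacks: [15 0] [15 0]
}
```

One handler call and ten fewer dispatch/stack operations, with a final stack identical to the unfused execution.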
The system is designed to fail safe: if anything goes wrong, execution automatically falls back to the original bytecode.
BSC doesn’t manually define super-instructions. Instead, they are mined from real transaction data.
The process: candidate sequences are mined from execution traces, scored by (length × frequency), and selected greedily into a Top-K set, with scores adjusted where sequences overlap.
When overlaps occur, scores are adjusted. For example, if Super-instruction A = (a, b, c) and B = (a, b, c, d, e), then selecting B reduces the frequency of A. Conversely, selecting A reduces the effective length of B to only (d, e), lowering its score. This ensures the Top-K selected instructions are non-redundant.
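Plugging in illustrative numbers (the real frequencies come from mainnet traces) shows how the adjustment removes double counting:

```go
package main

import "fmt"

func main() {
	// A = (a, b, c), B = (a, b, c, d, e) — lengths from the example above.
	const lenA, lenB = 3, 5
	freqA, freqB := 1000, 600 // hypothetical occurrence counts

	scoreA := lenA * freqA // 3 * 1000 = 3000
	scoreB := lenB * freqB // 5 * 600  = 3000

	// If B is selected first, every occurrence of B already contains A,
	// so A's remaining frequency drops to freqA - freqB.
	scoreAAfterB := lenA * (freqA - freqB) // 3 * 400 = 1200

	// If A is selected first, B only adds its uncovered tail (d, e),
	// so its effective length shrinks to lenB - lenA.
	scoreBAfterA := (lenB - lenA) * freqB // 2 * 600 = 1200

	fmt.Println(scoreA, scoreB, scoreAAfterB, scoreBAfterA) // 3000 3000 1200 1200
}
```

Either way, whichever sequence is picked first sharply discounts the other, so the Top-K set never spends two slots on the same underlying pattern.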
This ensures the instruction set evolves naturally with network usage, always targeting the most impactful patterns.
Benefit: Builders, auditors, and node operators can rely on stability while enjoying efficiency gains.
Benchmarks show tangible improvements under real workloads.
Sync Performance Results:
In the 24-hour mixed transaction sync test, blocks synced in 494 minutes compared to 575 minutes baseline, representing a 14% improvement.
Faster block execution means smoother syncing, higher throughput, and improved scalability for the entire ecosystem.
Super-instructions make BSC execution faster, safer, and more efficient—without changing consensus or requiring developers to modify contracts.
These optimizations introduce no additional maintenance overhead for client operators and have zero impact on consensus.
Key benefits: faster execution, quicker sync, lower resource use, and zero impact on consensus.
Looking forward, the pipeline will be rerun periodically to adapt to new compiler output and application trends, ensuring that BSC continues to deliver efficiency gains where they matter most.
The key is that the optimization is a local execution speed-up that is semantically identical to the original bytecode. The on-chain bytecode remains the canonical version. Every super-instruction produces the exact same change to the EVM state (stack, memory, storage) as the original sequence of bytecodes it replaces. Therefore, the resulting state root of a block will be identical whether the node used the optimization or not. If a bug were to cause a semantic divergence, the node would fail to agree on the block's state root and fall out of sync, a failure that is self-contained.
The primary risk is a bug in the fusion engine or a super-instruction handler causing incorrect execution. Several safety measures guard against this: the runtime kill switch, the automatic fallback to original bytecode, and the fact that any semantic divergence surfaces as a self-contained state-root mismatch rather than a consensus split.
This process is not static. To keep the optimization effective, the pipeline is re-run periodically on up-to-date mainnet data, letting BSC retire stale patterns and capture new ones as compiler output and application trends change.
Adjusting all jump offsets in a contract would be a far more complex and fragile operation. By padding with NOPs instead, the system preserves the original length and all offsets, keeping the transformation simple, robust, and transparent to external tooling; the super-instruction handler simply advances the program counter over the NOPs in a single step.
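A sketch of the length-preserving rewrite. The super-instruction and NOP byte values (0xd0, 0xfc) are hypothetical, and matching raw bytes like this is a simplification; per the CFG step described earlier, the real engine works within basic blocks so PUSH immediates are never rewritten:

```go
package main

import (
	"bytes"
	"fmt"
)

// fuseInPlace replaces each match of pattern with the super-instruction
// opcode followed by NOP padding, keeping the code the same length so
// every jump offset stays valid.
func fuseInPlace(code, pattern []byte, superOp, nop byte) []byte {
	out := append([]byte(nil), code...) // never mutate the canonical bytecode
	for i := 0; i+len(pattern) <= len(out); i++ {
		if bytes.Equal(out[i:i+len(pattern)], pattern) {
			out[i] = superOp
			for j := 1; j < len(pattern); j++ {
				out[i+j] = nop // padding the handler skips in one step
			}
			i += len(pattern) - 1
		}
	}
	return out
}

func main() {
	// AND DUP2 ADD SWAP1 DUP2 LT as raw EVM opcodes.
	pattern := []byte{0x16, 0x81, 0x01, 0x90, 0x81, 0x10}
	code := append([]byte{0x60, 0x01}, pattern...)  // PUSH1 0x01, then the pattern
	fused := fuseInPlace(code, pattern, 0xd0, 0xfc) // hypothetical opcode values
	fmt.Printf("%d %d\n", len(code), len(fused))    // 8 8: offsets preserved
}
```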