For measurement purposes, we built a custom tool that acts as a proxy in front of the JSON-RPC endpoint exposed by each Execution Layer (EL) client. This allows us to collect data independently of internal client metrics and to compare the performance of each EL accurately.

We ran all clients on the Ethereum mainnet, setting up multiple identical VMs—one per node—with Lodestar as the Consensus Layer client. We then monitored engine_newPayload response times for each client.

Details

The tooling for measuring requests: https://github.com/NethermindEth/jrpc-interceptor
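The core idea of the interceptor can be sketched as a thin wrapper that sits between the CL and the EL, parses each JSON-RPC request to read its method name, and times how long the EL takes to answer. The sketch below is a simplified illustration of this approach, not the actual jrpc-interceptor implementation; the `JsonRpcTimer` class and its `forward` callable are hypothetical names introduced here.

```python
import json
import time
from collections import defaultdict


class JsonRpcTimer:
    """Illustrative proxy core: times JSON-RPC calls per method name.

    `forward` is any callable that takes the raw request body and returns
    the response body; in a real deployment it would POST the request to
    the EL client's engine API endpoint and return the HTTP response.
    """

    def __init__(self, forward):
        self.forward = forward
        self.timings = defaultdict(list)  # method name -> list of seconds

    def handle(self, body: str) -> str:
        method = json.loads(body).get("method", "unknown")
        start = time.perf_counter()
        response = self.forward(body)  # pass the request through unchanged
        self.timings[method].append(time.perf_counter() - start)
        return response


# Usage with a stub backend standing in for an EL client:
proxy = JsonRpcTimer(lambda body: '{"jsonrpc":"2.0","id":1,"result":{"status":"VALID"}}')
proxy.handle('{"jsonrpc":"2.0","id":1,"method":"engine_newPayloadV3","params":[]}')
print(len(proxy.timings["engine_newPayloadV3"]))  # one recorded sample
```

Because the proxy only forwards and timestamps, it measures the same request/response cycle the CL observes, without relying on each client's internal metrics.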

Hardware used

We used six identical machines with the following specifications:

Provider: Akamai (formerly Linode)
CPU: AMD EPYC 7713 64-Core Processor (32 vCPUs per machine)
Memory: 64 GB
Storage: 1.2 TB NVMe, ~120k IOPS
More details: Premium 64GB by Linode - Public Cloud Reference (cloud-mercato.com)

Client Versions

         Execution Client       Consensus Client
Node 1   Erigon 3.0.0-alpha2    Lodestar 1.20.2
Node 2   Besu 24.7.1            Lodestar 1.20.2
Node 3   Geth 1.14.8-unstable   Lodestar 1.20.2
Node 4   Reth 1.0.3             Lodestar 1.20.2
Node 5   Nethermind 1.27.1      Lodestar 1.20.2
Node 6   Nethermind 1.28.0 RC   Lodestar 1.20.2

Startup commands

MGas/s performance comparison

Below are the results of the comparison:


To better visualize the results, we normalized each client's throughput against Geth as the baseline (100%), which shows the performance differences between the other clients in percentage terms:
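The normalization described above is straightforward: each client's MGas/s figure is divided by Geth's and scaled to a percentage, so Geth is pinned at 100%. A minimal sketch of that calculation follows; the function name and the sample numbers in the usage line are hypothetical and are not the measured results.

```python
def percent_of_baseline(mgas_per_s: dict, baseline: str = "Geth") -> dict:
    """Express each client's MGas/s throughput relative to a baseline client.

    MGas/s is gas processed (in millions) divided by engine_newPayload
    processing time in seconds; the baseline client maps to 100%.
    """
    base = mgas_per_s[baseline]
    return {client: 100.0 * value / base for client, value in mgas_per_s.items()}


# Hypothetical illustration only -- not the measured results:
print(percent_of_baseline({"Geth": 200.0, "ClientA": 300.0}))
# -> {'Geth': 100.0, 'ClientA': 150.0}
```

A ratio-based view like this makes the relative gaps easy to read at a glance, independent of the absolute MGas/s scale.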