Benchmarks

MicroRaft includes a JMH benchmark suite for the core module. Use it to evaluate Java Raft throughput and latency tradeoffs under a controlled setup, not to make context-free production claims.

Run the benchmark suite

./gradlew benchmark

Make sure Java 11 is installed before running the suite locally.
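JMH's value is the measurement discipline it enforces: warmup iterations so the JIT compiles the hot path before timing starts, then averaged measured iterations. As a rough illustration only, a hand-rolled version of that pattern looks like this (the workload is a stand-in, not a MicroRaft operation, and real runs should use JMH itself):

```java
// Sketch of the warmup + timed-measurement pattern that JMH automates.
// The workload below is illustrative, not a MicroRaft benchmark.
class HarnessSketch {
    static long workload() {
        long acc = 0;
        for (int i = 0; i < 10_000; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) {
        // Warmup iterations let the JIT compile the hot path first.
        for (int i = 0; i < 5; i++) {
            workload();
        }
        // Measured iterations: report average time per operation.
        int ops = 50;
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            sink += workload();
        }
        long avgNanos = (System.nanoTime() - start) / ops;
        // The sink is printed so the JIT cannot eliminate the workload.
        System.out.println("avg ns/op: " + avgNanos + " (sink=" + sink + ")");
    }
}
```

JMH additionally handles forking, dead-code elimination, and statistical reporting, which is why the suite uses it rather than a loop like this.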

Run the witness overhead benchmark

./gradlew :microraft:jmh --args="WitnessReplicaOverheadBenchmark"

This scenario compares the overhead of local state machine execution and snapshotting against the witness fast path.
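The intuition behind the comparison can be sketched as follows: a full replica must execute every committed command against its state machine (and periodically snapshot that state), while a witness only advances its commit bookkeeping. These classes are illustrative only, not MicroRaft's actual witness implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of why the witness path is cheaper. Names here
// are hypothetical and do not reflect MicroRaft's real API.
class WitnessSketch {
    interface Replica {
        void apply(String command);
    }

    // A full replica executes every committed command against its state.
    static final class FullReplica implements Replica {
        final List<String> state = new ArrayList<>();
        public void apply(String command) {
            state.add(command); // real execution + snapshot cost lives here
        }
    }

    // A witness only advances its commit index; execution is a no-op.
    static final class WitnessReplica implements Replica {
        long commitIndex;
        public void apply(String command) {
            commitIndex++;
        }
    }
}
```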

What these Java Raft benchmarks are for

  • detecting regressions before they reach production workloads
  • comparing implementation changes inside the same controlled environment
  • measuring Java Raft throughput and latency tradeoffs under an explicit scenario

What these numbers do not prove

  • they do not describe every Java Raft deployment or every control-plane workload
  • they are not independent of storage, transport, command shape, CPU, memory, or network assumptions
  • they do not replace workload-specific soak tests, failure tests, or rollout rehearsals

Publish benchmark results responsibly

When sharing performance claims, include:

  • the exact benchmark command and git revision
  • JDK version and GC/runtime settings
  • hardware profile and machine isolation assumptions
  • scenario details, workload shape, and storage/network setup
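One way to make that reproducible is to capture the provenance automatically next to the results. A minimal wrapper sketch, with illustrative file names:

```shell
# Capture provenance alongside benchmark output (file name is illustrative).
rev=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
jdk=$(java -version 2>&1 | head -n 1)
{
  echo "command: ./gradlew benchmark"
  echo "revision: ${rev}"
  echo "jdk: ${jdk:-unknown}"
  echo "host: $(uname -srm)"
} > benchmark-meta.txt
cat benchmark-meta.txt
```

GC flags, isolation notes, and scenario parameters can be appended by hand before sharing the numbers.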

How to read the output

Look for trend changes first. If a benchmark is slower, ask whether the change bought safer durability, better batching, or more predictable failure behavior before treating it as a regression.
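A trend-first reading can be made concrete by comparing the current score against a stored baseline and treating small deltas as noise. The numbers and the 5% threshold below are illustrative, not a recommended policy:

```java
// Sketch: flag a benchmark delta only when it exceeds a noise threshold.
// Baseline, current score, and threshold are all illustrative.
class TrendCheck {
    static double relativeChange(double baseline, double current) {
        return (current - baseline) / baseline;
    }

    public static void main(String[] args) {
        double baselineOpsPerSec = 120_000.0; // previous run (illustrative)
        double currentOpsPerSec = 108_000.0;  // this run (illustrative)
        double change = relativeChange(baselineOpsPerSec, currentOpsPerSec);
        // Within ±5% is treated as run-to-run noise in this sketch.
        if (change < -0.05) {
            System.out.printf("possible regression: %.1f%%%n", change * 100);
        } else {
            System.out.printf("within noise: %.1f%%%n", change * 100);
        }
    }
}
```

Even a flagged drop still deserves the question in the paragraph above: did the change buy durability, batching, or predictability worth the cost?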