
Understanding and Minimizing Solana RPC Latency

November 23, 2025 · AllenHark Team

"Latency" is often treated as a buzzword, but in the world of Solana high-frequency trading (HFT), it is a physics problem. When a block time is 400ms, a 50ms delay represents 12.5% of the entire opportunity window. If you are 50ms late, you aren't just slow—you are irrelevant.

This article breaks down the four distinct phases of RPC latency and provides actionable engineering strategies to eliminate them.

The Latency Stack

When you call sendTransaction, time is lost in four distinct phases. Understanding this stack is key to optimization.

1. Network Latency (The Speed of Light)

This is the time it takes for your packet to travel from your server to the RPC node. It is purely a function of distance and fiber optics.

  • NYC to London: ~70ms
  • Tokyo to Singapore: ~35ms
  • Same Datacenter: Under 1ms

The Fix: Co-Location. If the RPC is in Frankfurt, your bot must be in Frankfurt. There is no software optimization that can beat the speed of light. Hosting your bot in the same rack as the RPC node reduces this latency to near zero.
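
A practical way to validate a region choice before deploying: time the same lightweight call against a candidate endpoint in each region and deploy where the number is smallest. Here is a minimal sketch using @solana/web3.js; the regional URLs and the timeEndpoint helper are placeholders, not a real provider's API.

import { Connection } from "@solana/web3.js";

// Placeholder regional endpoints; substitute your provider's real URLs.
const endpoints = [
  "https://fra.your-rpc.com",
  "https://nyc.your-rpc.com",
  "https://tyo.your-rpc.com",
];

// Round-trip time for a cheap call. The first request also pays the TLS
// handshake, so we warm the connection up once before measuring.
async function timeEndpoint(url: string): Promise<number> {
  const conn = new Connection(url);
  await conn.getSlot(); // warm-up
  const start = performance.now();
  await conn.getSlot();
  return performance.now() - start;
}

for (const url of endpoints) {
  console.log(url, (await timeEndpoint(url)).toFixed(1), "ms");
}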

2. Processing Latency (Serialization & Validation)

Once the RPC node receives your request, it must:

  1. Deserialize the JSON-RPC payload.
  2. Base58 decode the transaction.
  3. Verify the Ed25519 signature(s).
  4. Check that the blockhash is still recent against the blockhash cache.

How long this takes depends on who else is sharing the machine:

  • Public RPCs: 20-100ms under load due to CPU contention.
  • Dedicated RPCs: under 5ms.

The Fix: Use Dedicated RPCs. Shared nodes suffer from noisy neighbors: other tenants' requests queue ahead of yours on the same cores. A dedicated node ensures your requests never wait behind someone else's workload.
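
For intuition about where those cycles go, here is a rough client-side re-creation of steps 1-3 using @solana/web3.js, bs58, and tweetnacl. This is an illustration of the work involved, not the node's actual code path; step 4 is omitted because it requires the node's internal blockhash cache, and the validate name is ours.

import { Transaction } from "@solana/web3.js";
import nacl from "tweetnacl";
import bs58 from "bs58";

// Approximates the per-transaction work an RPC node performs before forwarding.
function validate(base58Payload: string): boolean {
  const wire = bs58.decode(base58Payload);           // step 2: base58 decode
  const tx = Transaction.from(wire);                 // deserialize the wire format
  const message = tx.serializeMessage();
  const { signature, publicKey } = tx.signatures[0]; // fee payer signs first
  if (!signature) return false;
  return nacl.sign.detached.verify(                  // step 3: Ed25519 verification
    message,
    signature,
    publicKey.toBytes()
  );
}

Signature verification dominates, which is why a saturated shared core turns a sub-millisecond job into tens of milliseconds of queueing.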

3. Propagation Latency (Gossip)

Once the RPC has validated your transaction, it must forward it to the current leader.

  • Standard Gossip: The transaction hops from node to node. Each hop adds 10-50ms of latency.
  • Direct Forwarding: The RPC connects directly to the leader's TPU port via QUIC.

The Fix: Use a provider like AllenHark that utilizes QUIC-based direct forwarding and Staked Connections. We maintain persistent connections to the top validators, allowing us to push your transaction directly to the block producer without intermediate hops.
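
Whatever provider handles the forwarding, make sure your client is not adding delay of its own: by default, web3.js runs a preflight simulateTransaction before the node forwards anything. A minimal sketch that disables it; fireAndForget is our name, and signedTx is assumed to be a fully signed transaction built elsewhere.

import { Connection, Transaction } from "@solana/web3.js";

// Submit a pre-signed transaction with no preflight simulation
// and no RPC-side retries.
async function fireAndForget(conn: Connection, signedTx: Transaction): Promise<string> {
  return conn.sendRawTransaction(signedTx.serialize(), {
    skipPreflight: true, // skip the simulation round trip before forwarding
    maxRetries: 0,       // don't let the node re-queue; keep retry logic in the bot
  });
}

The trade-off: skipping preflight means obviously invalid transactions are no longer rejected for free before reaching the leader, so your own pre-send checks have to be solid.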

4. Slot Lag (Distance to Leader)

Solana rotates leaders on a published schedule, with each validator producing four consecutive slots (~1.6s) before handing off. If the current leader is in Tokyo and your RPC is in New York, you pay a physical distance penalty for those specific slots.

The Fix: Geodistributed RPCs. You need an infrastructure provider that intelligently routes your request to an egress node physically closest to the current leader.
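
The leader schedule is public, so you can see exactly who you are racing toward at any moment. A small sketch; the endpoint URL is a placeholder.

import { Connection } from "@solana/web3.js";

const conn = new Connection("https://your-rpc.com");

// Identity (node pubkey) of the validator producing the current slot.
const leader = await conn.getSlotLeader();

// The epoch's schedule maps each identity to the slot indices it will lead
// (indices are relative to the start of the current epoch).
const schedule = await conn.getLeaderSchedule();
const slots = schedule[leader] ?? [];

console.log(`current leader: ${leader} (${slots.length} slots this epoch)`);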

Measuring True Latency

Do not rely on ping to measure RPC performance. Ping only measures ICMP response time, which is answered by the operating system's network stack, not by the RPC software; it tells you nothing about parsing, signature verification, or queue depth.

To measure RPC latency, time a lightweight call like getSlot or getVersion.

# This measures the full round trip, including the TLS handshake and JSON processing
time curl -s -X POST -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' \
     https://your-rpc.com
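
One caveat: each fresh curl run pays a new TLS handshake, so it overstates what a long-lived bot with persistent connections actually sees. For a steady-state number, reuse one Connection object and look at the median and tail over many calls. A sketch with a placeholder URL:

import { Connection } from "@solana/web3.js";

const conn = new Connection("https://your-rpc.com");

await conn.getSlot(); // warm-up: absorbs the TLS handshake

const samples: number[] = [];
for (let i = 0; i < 50; i++) {
  const start = performance.now();
  await conn.getSlot();
  samples.push(performance.now() - start);
}

samples.sort((a, b) => a - b);
console.log("median:", samples[Math.floor(samples.length * 0.5)].toFixed(1), "ms");
console.log("p95:   ", samples[Math.floor(samples.length * 0.95)].toFixed(1), "ms");

For HFT, the p95 matters more than the median: a node that is fast on average but occasionally stalls will lose you exactly the slots you care about.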

The AllenHark Advantage

At AllenHark, we optimize every layer of the stack:

  1. Hardware: We run on Ryzen 7950X and 9950X servers in TeraSwitch Frankfurt (the heart of Solana).
  2. Network: We have direct PNI (Private Network Interconnect) to major validators.
  3. Software: Our custom Rust implementations strip away the overhead of standard validator software.

Stop fighting physics. Move your infrastructure to where the action is.