Infrastructure

How to Submit Transactions Directly to the Leader TPU on Solana

March 5, 2026 · AllenHark Team

When milliseconds matter — in arbitrage, liquidations, or high-frequency trading — the standard Solana transaction submission path is too slow. Public RPCs send your transaction through the gossip network, adding hundreds of milliseconds of latency before it reaches the current leader.

There's a faster way: submit directly to the leader validator's Transaction Processing Unit (TPU) via UDP.

This guide covers what TPU submission is, how it works at the protocol level, how to implement it yourself, and how to use AllenHark Slipstream to do it without managing infrastructure.

How Solana Processes Transactions

Before diving into TPU submission, it helps to understand the standard flow:

  1. You send a transaction to an RPC node via sendTransaction
  2. The RPC node validates the transaction and forwards it to the current leader via the gossip network
  3. The gossip network propagates the transaction across nodes
  4. The leader validator picks it up from its incoming queue and processes it in its TPU

The TPU is the component inside the validator that actually executes transactions. It consists of several stages:

  • Fetch stage — receives transactions via UDP and QUIC
  • SigVerify stage — verifies transaction signatures in parallel
  • Banking stage — executes transactions against the current bank state
  • Broadcast stage — sends results as shreds to the rest of the network

The key insight: if you send your transaction directly to the leader's fetch stage via UDP, you bypass steps 2 and 3 entirely.

Why Direct TPU Submission Is Faster

Path                         | Hops | Typical Latency
Public RPC → Gossip → Leader | 3+   | 500ms–2s
Private RPC → Leader forward | 2    | 100–500ms
Direct UDP → Leader TPU      | 1    | 1–50ms

Direct TPU submission eliminates every intermediary. Your transaction goes from your machine (or your closest relay worker) straight to the leader's UDP port. The tradeoff: UDP is fire-and-forget. There's no acknowledgment, no retry, and no confirmation receipt.

Requirements for DIY TPU Submission

To submit directly to the leader TPU, you need:

1. Know Who the Current Leader Is

The Solana leader schedule assigns each leader 4 consecutive slots (~400ms each, so ~1.6 seconds per rotation). You need to track the leader schedule in real-time to know which validator is currently producing blocks.

// Fetch leader schedule from RPC
let leader_schedule = rpc_client.get_leader_schedule(None)?;

2. Resolve the Leader's TPU Address

Each validator advertises its TPU address (IP + port) via the gossip network's ContactInfo. You need to look up the current leader's TPU socket.

// Get cluster nodes to find TPU addresses
let nodes = rpc_client.get_cluster_nodes()?;
let leader_tpu = nodes.iter()
    .find(|n| n.pubkey == leader_pubkey)
    .and_then(|n| n.tpu);

3. Serialize and Send via UDP

Solana transactions are sent as raw serialized bytes over UDP to the leader's TPU port.

use std::net::UdpSocket;

let socket = UdpSocket::bind("0.0.0.0:0")?;
let serialized_tx = bincode::serialize(&transaction)?;
socket.send_to(&serialized_tx, leader_tpu_addr)?;
// No response — fire-and-forget

4. Handle the Leader Schedule Race

The leader changes every ~1.6 seconds. If you send to the wrong leader, your transaction is silently dropped. You need to:

  • Track the current slot in real-time (via WebSocket slotSubscribe)
  • Pre-compute the next few leaders
  • Send to both the current and upcoming leader for redundancy
  • Handle epoch boundaries where the schedule changes
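The pre-compute-and-fan-out step above can be sketched as a pure function: invert the epoch schedule once, then collect the distinct leaders over a short lookahead window so the same transaction can be sent to each of them. This is an illustrative sketch with a toy schedule, not SDK code:

```python
# Hypothetical sketch: given the epoch's leader schedule, pre-compute the
# distinct leaders for the current slot and the next few slots, so a
# transaction can be fanned out to all of them for redundancy.

def upcoming_leaders(schedule: dict[str, list[int]], slot: int,
                     epoch_start: int, lookahead: int = 8) -> list[str]:
    """Distinct leader pubkeys covering `slot` .. `slot + lookahead`, in order."""
    # Invert the schedule once: slot index -> leader pubkey.
    by_index = {i: pk for pk, idxs in schedule.items() for i in idxs}
    leaders: list[str] = []
    for s in range(slot, slot + lookahead + 1):
        pk = by_index.get(s - epoch_start)
        if pk is not None and pk not in leaders:
            leaders.append(pk)
    return leaders

schedule = {
    "validatorA": [0, 1, 2, 3],
    "validatorB": [4, 5, 6, 7],
    "validatorC": [8, 9, 10, 11],
}
# Mid-way through validatorA's rotation: fan out to A, B, and C.
print(upcoming_leaders(schedule, 2, 0))  # ['validatorA', 'validatorB', 'validatorC']
```

A lookahead of ~8 slots covers the current leader plus at least the next one, which is usually enough to survive a mid-send rotation.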

5. Monitor Confirmation Separately

Since UDP gives you no response, you need a separate pipeline to know if your transaction landed:

// Option 1: WebSocket signatureSubscribe
ws_client.signature_subscribe(&signature, commitment)?;

// Option 2: Poll getSignatureStatuses
let statuses = rpc_client.get_signature_statuses(&[signature])?;

// Option 3: gRPC subscription (e.g., Yellowstone)
// Subscribe to transaction updates for your signature

Challenges with DIY TPU Submission

While the concept is simple, production-grade TPU submission has several challenges:

  • QUIC migration: Mainnet validators have largely migrated from UDP to QUIC for transaction ingestion, and many disable the legacy UDP port. QUIC requires a TLS handshake and connection management, making direct submission more complex than a raw UDP send.
  • Stake-weighted QoS: Validators prioritize transactions from staked connections. Unstaked UDP packets may be deprioritized or dropped under load.
  • Geographic latency: If you're far from the current leader, your UDP packet arrives late. The leader changes every 1.6s — if your packet arrives after the rotation, it's dropped.
  • No retry logic: UDP has no built-in reliability. Dropped packets are silently lost.
  • Leader schedule tracking: You need real-time slot tracking and accurate leader resolution, which requires maintaining your own cluster state.
  • Multi-region complexity: To consistently hit leaders across different data centers, you need relay infrastructure in multiple regions.
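The geographic-latency point can be made concrete with slot arithmetic: a leader owns 4 slots of ~400ms each, so a packet only lands if its one-way latency is less than the time remaining in that 1.6s window. A back-of-the-envelope sketch (constants and function names are illustrative):

```python
# Hypothetical sketch: a leader owns 4 consecutive slots of ~400 ms each
# (~1.6 s total). Given how far into that window we are and the one-way
# network latency to the leader, estimate whether the packet can still
# arrive before rotation; if not, it must also go to the next leader.

SLOT_MS = 400
SLOTS_PER_LEADER = 4

def window_remaining_ms(slot: int, ms_into_slot: float, leader_first_slot: int) -> float:
    """Milliseconds left in the current leader's 4-slot window."""
    slots_done = slot - leader_first_slot
    elapsed = slots_done * SLOT_MS + ms_into_slot
    return SLOTS_PER_LEADER * SLOT_MS - elapsed

def reaches_current_leader(slot: int, ms_into_slot: float,
                           leader_first_slot: int, one_way_latency_ms: float) -> bool:
    return one_way_latency_ms < window_remaining_ms(slot, ms_into_slot, leader_first_slot)

# Third slot of the window, 200 ms in: 600 ms left. A 90 ms hop makes it;
# a 700 ms hop would arrive after rotation and be silently dropped.
print(reaches_current_leader(102, 200, 100, 90))   # True
print(reaches_current_leader(102, 200, 100, 700))  # False
```

This is why cross-continent submission to a distant leader routinely misses the window, and why relay workers near the leader help.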

Using Slipstream for TPU Submission

AllenHark Slipstream solves all of these challenges. It maintains workers in multiple regions (currently US East Chicago, EU West, EU Central, and Asia Pacific Singapore) that track the leader schedule in real-time and submit directly to the leader's TPU from the closest worker.

Install the SDK

# Rust
cargo add allenhark-slipstream

# TypeScript
npm install @allenhark/slipstream

# Python
pip install AllenHarkSlipstream

Submit via TPU — Rust

use allenhark_slipstream::{Config, SlipstreamClient, SubmitOptions};

let config = Config::builder()
    .api_key("sk_live_...")
    .build()?;

let client = SlipstreamClient::connect(config).await?;

// Fire-and-forget TPU submission
let result = client.submit_transaction_with_options(&tx_bytes, &SubmitOptions {
    tpu_submission: true,
    ..Default::default()
}).await?;

println!("Request ID: {}", result.request_id);
// Status will be "sent" — no confirmation tracking

Submit via TPU — TypeScript

import { SlipstreamClient, configBuilder } from "@allenhark/slipstream";

const config = configBuilder()
    .apiKey("sk_live_...")
    .build();

const client = await SlipstreamClient.connect(config);

const result = await client.submitTransactionWithOptions(txBytes, {
    tpuSubmission: true,
});

console.log("Request ID:", result.requestId);

Submit via TPU — Python

from allenhark_slipstream import SlipstreamClient, config_builder, SubmitOptions

config = config_builder().api_key("sk_live_...").build()
client = await SlipstreamClient.connect(config)

result = await client.submit_transaction_with_options(tx_bytes, SubmitOptions(
    tpu_submission=True,
))

print(f"Request ID: {result.request_id}")

Why Slipstream Over DIY

                   | DIY TPU                      | Slipstream TPU
Leader tracking    | You maintain it              | Handled by workers
Multi-region       | You deploy servers           | 4 regions, auto-routed
QUIC/UDP handling  | You implement it             | Handled by workers
Stake-weighted QoS | Your own stake               | Slipstream's staked connections
Cost               | Infrastructure + maintenance | 2 tokens (0.0001 SOL) per tx
Setup time         | Days–weeks                   | 5 minutes

Combining TPU with Leader Hints

For optimal timing, subscribe to Slipstream's leader hint stream. Wait until the leader is near a Slipstream worker, then fire:

import { SlipstreamClient, configBuilder } from "@allenhark/slipstream";

const config = configBuilder().apiKey("sk_live_...").build();
const client = await SlipstreamClient.connect(config);

await client.subscribeLeaderHints();

client.on("leaderHint", async (hint) => {
    // Only submit when leader is close to a worker
    if (hint.confidence >= 80) {
        await client.submitTransactionWithOptions(txBytes, {
            tpuSubmission: true,
        });
    }
});

When to Use TPU Submission

Use it for:

  • Latency-critical arbitrage where you need to reach the leader ASAP
  • High-frequency strategies where you monitor confirmation separately
  • Spam-resistant strategies where you send many transactions and only need one to land
  • Time-sensitive liquidations where every millisecond counts

Don't use it for:

  • Transactions that need delivery guarantees — use standard submission instead
  • Bundle submissions — TPU doesn't support atomic bundles
  • Low-frequency trading where 100ms vs 500ms doesn't matter

Further Reading