The Pump.fun trenches are brutal. Thousands of new tokens launch every day, and for every 100x gem, there are 10,000 bots fighting to buy the first candle.
If you are trying to compete using a JavaScript bot running on your laptop, you have already lost. You are bringing a plastic knife to a thermonuclear war.
This guide is not for the faint of heart. We are going to tear down the "easy" way of doing things and build a sniper the right way: using Rust, raw UDP packets, and direct TPU communication.
Table of Contents
- The Architecture of a Winning Sniper
- Why Node.js is a Dead End (With Benchmarks)
- Step 1: Ingesting the Shred Stream
- Step 2: Decoding the Matrix (Parsing Shreds)
- Step 3: Detecting the Pump.fun Create Instruction
- Step 4: Building the Swap Transaction
- Step 5: Bypassing RPCs with a TPU Client
- Step 6: Using AllenHark Relay (The "Cheat Code")
- Infrastructure: The Physical Layer
- Conclusion
The Architecture of a Winning Sniper
A winning sniper is not a single script. It is a pipeline of highly optimized components designed to minimize latency at every stage.
The Pipeline
- Ingest Layer (0.02ms): Listens to raw UDP "Shreds" from the Turbine protocol. This is the raw data validators share with each other before a block is even finalized.
- Filter Layer (0.01ms): Discards 99.9% of traffic (vote transactions, simple transfers) to focus only on the Pump.fun Program ID 6EF8rrect8qv5F....
- Signal Layer (0.05ms): Decodes the instruction data to find the Create method and extracts the new Token Mint Address.
- Execution Layer (0.10ms): Signs a pre-built transaction with the new Mint Address.
- Network Layer (Speed of Light): Blasts the transaction directly to the current Leader's TPU (Transaction Processing Unit) port via UDP.
Total Internal Latency: < 0.5ms.
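Concretely, the pipeline maps onto a handful of tokio tasks wired together with bounded channels. The sketch below is illustrative only: the channel sizes and the helper names (extract_pump_create_mint, fire_buy) are placeholders for the logic built out in Steps 1-5, not part of any SDK.

use tokio::sync::mpsc;

// Minimal pipeline skeleton: each stage runs on its own task and hands work to the
// next stage over a bounded channel, so no stage ever blocks another. Returns the
// sender that the ingest layer (Step 1) pushes raw transaction bytes into.
fn spawn_pipeline() -> mpsc::Sender<Vec<u8>> {
    let (raw_tx, mut raw_rx) = mpsc::channel::<Vec<u8>>(10_000);       // Ingest -> Filter/Signal
    let (signal_tx, mut signal_rx) = mpsc::channel::<[u8; 32]>(1_000); // Signal -> Execution

    // Filter + Signal layers: drop everything that is not a Pump.fun Create
    tokio::spawn(async move {
        while let Some(packet) = raw_rx.recv().await {
            if let Some(mint) = extract_pump_create_mint(&packet) {
                let _ = signal_tx.send(mint).await;
            }
        }
    });

    // Execution + Network layers: sign the pre-built transaction and fire it at the leader
    tokio::spawn(async move {
        while let Some(mint) = signal_rx.recv().await {
            fire_buy(mint).await;
        }
    });

    raw_tx
}

// Placeholder hooks; the real logic is built out in Steps 2-5 below.
fn extract_pump_create_mint(_packet: &[u8]) -> Option<[u8; 32]> { None }
async fn fire_buy(_mint: [u8; 32]) {}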
Compare this to a standard RPC bot:
- Poll Log: 500ms
- Parse JSON: 50ms
- Build Tx: 20ms
- Send to RPC: 100ms
- RPC Forwarding: 200ms
- Total: ~870ms.
You are literally 1,700x slower.
Why Node.js is a Dead End (With Benchmarks)
We ran a benchmark comparing a highly optimized Node.js bot (using @solana/web3.js) against our Rust implementation. Both were running on the same hardware (Ryzen 9950X Pro).
Benchmark: Parsing 10,000 Transactions
| Metric | Node.js (V8 Engine) | Rust (LLVM Optimized) | Difference |
|---|---|---|---|
| Average Parse Time | 450 µs | 12 µs | 37x Faster |
| P99 Latency | 2,100 µs | 18 µs | 116x Faster |
| GC Pauses (per min) | 14 | 0 | Infinite |
| Memory Footprint | 450 MB | 22 MB | 20x Smaller |
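For context on methodology: the harness is essentially a tight deserialization loop with per-iteration timing. Here is a simplified, self-contained stand-in that times bincode round-trips of a default transaction rather than real captured traffic, so treat its output as indicative only, not a reproduction of the table above.

use std::time::Instant;

use solana_sdk::transaction::Transaction;

fn main() {
    // Stand-in payload: a default (empty) legacy transaction serialized with bincode.
    // A real benchmark would replay captured mainnet transactions instead.
    let payload = bincode::serialize(&Transaction::default()).unwrap();

    let mut samples = Vec::with_capacity(10_000);
    for _ in 0..10_000 {
        let start = Instant::now();
        let _tx: Transaction = bincode::deserialize(&payload).unwrap();
        samples.push(start.elapsed().as_nanos());
    }

    samples.sort_unstable();
    let avg = samples.iter().sum::<u128>() / samples.len() as u128;
    let p99 = samples[(samples.len() * 99) / 100];
    println!("avg: {} ns, p99: {} ns", avg, p99);
}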
The Garbage Collection Trap
In Node.js, the Garbage Collector (GC) stops your code to clean up memory. These "Stop-the-World" pauses can last 10ms to 50ms.
In Solana, a slot is 400ms. If your GC kicks in during the critical window when a leader is accepting packets, you miss the block. Rust has no garbage collector: memory is freed deterministically according to ownership rules checked at compile time, so there are no runtime pauses.
Step 1: Ingesting the Shred Stream
We don't use connection.onLogs. We use a Geyser plugin or a raw Shred stream. For this example, we'll assume you are connecting to the AllenHark Shred Stream via gRPC.
Rust Dependencies
Add these to your Cargo.toml:
[dependencies]
tokio = { version = "1.0", features = ["full"] }
tokio-stream = "0.1" # Geyser's Subscribe RPC is bidirectional; we wrap our request in a stream
solana-sdk = "1.18"
solana-client = "1.18"
tonic = "0.10" # For gRPC
bincode = "1.3"
yellowstone-grpc-proto = "1.4"
Connecting to the Stream
use std::collections::HashMap;

use yellowstone_grpc_proto::geyser::geyser_client::GeyserClient;
use yellowstone_grpc_proto::geyser::subscribe_update::UpdateOneof;
use yellowstone_grpc_proto::geyser::SubscribeRequest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to AllenHark's low-latency gRPC endpoint
    let mut client = GeyserClient::connect("http://shreds.allenhark.com:10000").await?;

    let request = SubscribeRequest {
        transactions: HashMap::new(), // We'll filter later for speed
        blocks: HashMap::new(),
        accounts: HashMap::new(),
        ..Default::default()
    };

    // Subscribe is a bidirectional stream, so the request itself must be wrapped in a stream
    let mut stream = client.subscribe(tokio_stream::once(request)).await?.into_inner();

    while let Some(message) = stream.message().await? {
        // The update payload is a oneof; we only care about transaction updates
        if let Some(UpdateOneof::Transaction(tx)) = message.update_oneof {
            process_transaction(tx);
        }
    }

    Ok(())
}
Step 2: Decoding the Matrix (Parsing Shreds)
The raw transaction data comes in as bytes. We need to deserialize it extremely fast.
use std::str::FromStr;

use solana_sdk::pubkey::Pubkey;
use solana_sdk::transaction::VersionedTransaction;

fn process_transaction(tx_update: SubscribeUpdateTransaction) {
    let Some(tx_data) = tx_update.transaction else { return };

    // Zero-copy deserialization if possible, or fast bincode.
    // `tx_data.data` stands for the raw serialized transaction bytes exposed by the stream.
    let Ok(tx) = bincode::deserialize::<VersionedTransaction>(&tx_data.data) else { return };

    // Check if this transaction interacts with Pump.fun
    // Pump.fun Program ID: 6EF8rrect8qv5F...
    // (In production, parse this Pubkey once at startup, not on the hot path.)
    let pump_program_id = Pubkey::from_str("6EF8rrect8qv5F...").unwrap();

    let account_keys = tx.message.static_account_keys();
    if account_keys.contains(&pump_program_id) {
        analyze_pump_instruction(&tx);
    }
}
Step 3: Detecting the Pump.fun Create Instruction
Now we need to look inside the transaction to see if it's a Create instruction (new token launch). Pump.fun uses an 8-byte discriminator for instructions.
fn analyze_pump_instruction(tx: &VersionedTransaction) {
    // Pump.fun "Create" discriminator (sha256("global:create")[..8])
    // This is a constant you should pre-calculate
    const CREATE_DISCRIMINATOR: [u8; 8] = [24, 30, 200, 40, 5, 28, 7, 119];

    for ix in tx.message.instructions() {
        if ix.data.starts_with(&CREATE_DISCRIMINATOR) {
            // WE FOUND A NEW TOKEN LAUNCH!
            // The Mint address is usually the first account in the instruction
            let mint_index = ix.accounts[0];
            let mint_address = tx.message.static_account_keys()[mint_index as usize];

            println!("SNIPER ALERT: New Token Detected: {:?}", mint_address);

            // TRIGGER BUY IMMEDIATELY
            execute_buy(mint_address);
        }
    }
}
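If you prefer deriving the discriminator over hard-coding it, the Anchor convention is the first 8 bytes of sha256("global:<method_name>"). A minimal sketch, assuming you add the sha2 crate to your Cargo.toml:

use sha2::{Digest, Sha256};

// Derive an Anchor-style instruction discriminator: sha256("global:<name>")[..8].
fn anchor_discriminator(method_name: &str) -> [u8; 8] {
    let hash = Sha256::digest(format!("global:{method_name}").as_bytes());
    let mut disc = [0u8; 8];
    disc.copy_from_slice(&hash[..8]);
    disc
}

fn main() {
    // For Pump.fun's `create` instruction this should reproduce the constant used above.
    println!("{:?}", anchor_discriminator("create"));
}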
Step 4: Building the Swap Transaction
You cannot wait to "ask" the RPC for a blockhash. You must maintain a cache of the latest blockhash in a background thread so it's ready instantly.
use solana_sdk::signature::{Keypair, Signer};
use solana_sdk::signer::EncodableKey;
use solana_sdk::transaction::Transaction;

fn execute_buy(mint: Pubkey) {
    // In production, load the keypair once at startup; reading it from disk here adds latency
    let payer = Keypair::read_from_file("~/.config/solana/id.json").unwrap();
    let recent_blockhash = BLOCKHASH_CACHE.read().unwrap(); // From background thread

    // Construct the swap instruction (Raydium or Pump.fun swap)
    let swap_ix = build_pump_swap_ix(
        &payer.pubkey(),
        &mint,
        1_000_000, // Amount of SOL (lamports) to spend
        0,         // Min output (slippage)
    );

    let tx = Transaction::new_signed_with_payer(
        &[swap_ix],
        Some(&payer.pubkey()),
        &[&payer],
        *recent_blockhash,
    );

    send_to_tpu(tx);
}
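BLOCKHASH_CACHE above is not a solana-sdk primitive; you maintain it yourself. One possible shape, assuming the once_cell crate and the non-blocking RPC client, is a global RwLock refreshed by a background task:

use std::sync::RwLock;
use std::time::Duration;

use once_cell::sync::Lazy;
use solana_client::nonblocking::rpc_client::RpcClient;
use solana_sdk::hash::Hash;

// Globally readable blockhash, refreshed by a background task.
// It starts as the all-zero hash and is only valid after the first refresh.
static BLOCKHASH_CACHE: Lazy<RwLock<Hash>> = Lazy::new(|| RwLock::new(Hash::default()));

// Spawn this once at startup; execute_buy() then reads the cache with zero RPC round-trips.
pub fn spawn_blockhash_refresher(rpc_url: String) {
    tokio::spawn(async move {
        let rpc = RpcClient::new(rpc_url);
        loop {
            if let Ok(hash) = rpc.get_latest_blockhash().await {
                *BLOCKHASH_CACHE.write().unwrap() = hash;
            }
            // Blockhashes stay valid for ~150 slots (~60s); refreshing roughly every slot keeps it hot.
            tokio::time::sleep(Duration::from_millis(400)).await;
        }
    });
}

Call spawn_blockhash_refresher once during startup, before the subscription loop from Step 1 begins.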
Step 5: Bypassing RPCs with a TPU Client
Standard sendTransaction hands your transaction to an RPC node, which queues it and forwards it to the current leader over its own connections. Every extra hop and queue adds latency, typically 200-500ms in total.
We want to skip the middleman and send it directly to the leader's TPU port ourselves.
use std::sync::Arc;

use solana_client::rpc_client::RpcClient;
use solana_client::tpu_client::{TpuClient, TpuClientConfig};

fn send_to_tpu(tx: Transaction) {
    // Create a TPU client connected to the current cluster
    let rpc_client = Arc::new(RpcClient::new("http://127.0.0.1:8899".to_string()));
    let ws_url = "ws://127.0.0.1:8900";

    let tpu_client = TpuClient::new(
        rpc_client,
        ws_url,
        TpuClientConfig::default(),
    )
    .unwrap();

    // Send the transaction directly to the current leader
    // This uses QUIC or UDP depending on configuration
    tpu_client.send_transaction(&tx);

    println!("Transaction blasted to leader!");
}
Step 6: Using AllenHark Relay (The "Cheat Code")
Building your own TPU client is hard. You have to manage QUIC connections, handle packet drops, and keep an up-to-date leader schedule.
Instead, you can use AllenHark Relay. We maintain persistent, high-performance connections to every validator. You just send your signed transaction to us, and we forward it with sub-millisecond dispatch.
Integration: QUIC Protocol (Recommended)
AllenHark Relay supports QUIC for ultra-low latency (0.1ms possible) and HTTPS for simpler integration.
Endpoint: relay.allenhark.com:4433 (QUIC) or https://relay.allenhark.com/v1/sendTx (HTTPS)
Tip Wallet: harkm2BTWxZuszoNpZnfe84jRbQTg6KGHaQBmWzDGQQ
Minimum Tip: 0.001 SOL (1,000,000 lamports)
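The tip itself is just a SOL transfer to the wallet above. Whether it rides inside your swap transaction or travels as a separate tip transaction (the QUIC example below passes a separate tip signature) is covered in the relay docs; either way, the underlying instruction is a plain System Program transfer. A minimal sketch:

use std::str::FromStr;

use solana_sdk::instruction::Instruction;
use solana_sdk::pubkey::Pubkey;
use solana_sdk::system_instruction;

// Build the 0.001 SOL (1,000,000 lamport) tip instruction for AllenHark Relay.
fn build_tip_ix(payer: &Pubkey) -> Instruction {
    let tip_wallet = Pubkey::from_str("harkm2BTWxZuszoNpZnfe84jRbQTg6KGHaQBmWzDGQQ").unwrap();
    system_instruction::transfer(payer, &tip_wallet, 1_000_000)
}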
Rust Example (QUIC via quinn)
use quinn::{ClientConfig, Endpoint};
use solana_sdk::transaction::Transaction;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut endpoint = Endpoint::client("0.0.0.0:0".parse().unwrap())?;
    let client_config = ClientConfig::with_native_roots();
    endpoint.set_default_client_config(client_config);

    // Connect to AllenHark Relay
    let connection = endpoint
        .connect("151.241.71.10:4433".parse().unwrap(), "relay.allenhark.com")?
        .await?;
    println!("Connected to AllenHark Relay");

    // Open a bidirectional stream
    let (mut send, _recv) = connection.open_bi().await?;

    // Serialize your transaction (simplified)
    let tx_bytes = bincode::serialize(&your_transaction)?;
    let tip_signature = create_tip_signature()?; // Your tip transaction signature

    // Construct packet: [version, encoding, tx_length, tx_bytes, tip_sig]
    let mut packet = vec![1u8, 1u8]; // version=1, encoding=binary
    packet.extend_from_slice(&(tx_bytes.len() as u16).to_le_bytes());
    packet.extend_from_slice(&tx_bytes);
    packet.extend_from_slice(&tip_signature);

    send.write_all(&packet).await?;
    send.finish().await?;

    println!("Transaction sent via QUIC!");
    Ok(())
}
Rust Example (HTTPS via reqwest)
use reqwest::Client;
use serde_json::json;
use solana_sdk::bs58;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Your signed transaction as base58
    let tx_base58 = bs58::encode(tx_bytes).into_string();

    let res = client.post("https://relay.allenhark.com/v1/sendTx")
        .json(&json!({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "sendTransaction",
            "params": [
                tx_base58,
                {
                    "skipPreflight": true,
                    "commitment": "processed"
                }
            ]
        }))
        .send()
        .await?;

    let response: serde_json::Value = res.json().await?;
    println!("Response: {:?}", response);

    Ok(())
}
Transaction Sender Comparison
Why use a Relay instead of standard RPC or other providers?
| Feature | AllenHark Relay | Nozomi | Helius (Atlas) | BloXroute | Standard RPC |
|---|---|---|---|---|---|
| Protocol | QUIC + HTTPS | HTTP | HTTP/gRPC | gRPC | HTTP |
| Latency (QUIC) | 0.1-2ms | ~50ms | ~20ms | ~15ms | ~200ms |
| Leader Connection | Direct PNI (Frankfurt) | Public Internet | Public Internet | Public Internet | Gossip |
| Jito Bundle Support | Native | No | Yes | Yes | No |
| Shred Integration | Yes (0-hop) | No | No | Yes | No |
| Throughput (QUIC) | 10,000 tx/s | 1,000 tx/s | 2,000 tx/s | 5,000 tx/s | 500 tx/s |
| Cost | 0.001 SOL/tx | High % Fee | Monthly Sub | Monthly Sub | Free/Cheap |
Full Documentation: AllenHark Relay Docs
Infrastructure: The Physical Layer
You have the code. It runs in 50 microseconds. But if you run this from your house, you will still lose.
The Speed of Light Problem
- Your House (USA) -> Leader (Frankfurt): ~100ms latency.
- Competitor (Frankfurt) -> Leader (Frankfurt): ~0.1ms latency.
Your competitor is 1,000x faster purely due to physics. No amount of Rust optimization can fix this.
The Solution: Co-Location
You need to rent a bare metal server in the exact datacenter where the validators are. For Solana, the "Holy Trinity" of datacenters is:
- TeraSwitch (Frankfurt, DE) - Highest concentration of stake.
- Equinix TY3 (Tokyo, JP)
- Equinix NY5 (New York, US)
AllenHark offers managed bare metal servers in TeraSwitch Frankfurt. When you rent from us, you aren't just getting a server. You are getting:
- Direct PNI (Private Network Interconnect): A physical cable connecting your rack to the validator's rack.
- Internal Shred Stream: We multicast shreds locally. You get them 200ms faster than public gRPC.
Conclusion
Building a winning Pump.fun sniper is an engineering marvel. It requires mastery of:
- Systems Programming (Rust)
- Networking (UDP/QUIC)
- Cryptography (Ed25519)
- Physical Infrastructure (Co-location)
If you have the skills and the time (3-6 months), following this guide will put you ahead of 99% of the market.
However, if you want to start winning today without writing 10,000 lines of Rust:
Download the AllenHark Bonk Sniper. We have already built the Rust core, optimized the networking, and deployed it in Frankfurt. You just provide the keys and the strategy.