
The Nanosecond Economy: Engineering HFT Infrastructure That Prints Money

How we reduced tick-to-trade latency from 12µs to 2.8µs at a top HFT desk. FPGA feed handlers, kernel bypass, and the $50M/year cost of 1 microsecond.

4 min
#hft #latency #fpga #trading #kernel-bypass #cefi #infrastructure

At a top-tier HFT desk, 1 microsecond of latency costs $50M/year in lost alpha.

I know this because we measured it. When we reduced tick-to-trade from 12µs to 2.8µs, the desk’s fill rate on competitive orders increased by 23%. The math was simple: faster execution means you’re first in queue. First in queue means you get filled. Getting filled on a 0.02% edge, 10,000 times a day, compounds to $50M.

This post documents the infrastructure that makes sub-3µs trading possible: not the algorithms, but the plumbing.

1. The Physics of Speed

At HFT timescales, everything is physics:

| Component | Latency | Notes |
| --- | --- | --- |
| Light through 1m fiber | 5ns | Speed of light |
| L3 cache access | 10-20ns | On-die |
| DRAM access | 60-100ns | Off-die |
| Kernel syscall | 200-500ns | Context switch |
| Network interrupt | 1-5µs | IRQ handling |
| TCP stack | 5-10µs | Kernel networking |

Insight: If your trade decision takes 1µs, but your network stack takes 10µs, you’ve already lost.

The Latency Budget

A competitive HFT system allocates its latency budget ruthlessly:

Market Data (500ns) → Feed Handler (200ns) → Strategy (800ns) → Risk (100ns) → Order Send (1µs)

Total Budget: ~2.6µs tick-to-trade.

2. The Decision Matrix: Feed Handlers

The first bottleneck is market data ingestion. Every exchange sends a firehose of quotes.

| Approach | Latency | Cost | Verdict |
| --- | --- | --- | --- |
| A. Software (C++ on Linux) | 3-5µs | $50K/year | Baseline. Acceptable for MM, not for latency arb. |
| B. Kernel Bypass (DPDK/Solarflare) | 500ns-1µs | $100K/year | Better. Eliminates kernel overhead. |
| C. FPGA Feed Handler | 50-200ns | $500K/year | Selected. Wire-speed parsing. |

Why FPGA? An FPGA parses the packet as it arrives, byte by byte. There is no store-and-forward. By the time the last byte of a quote arrives, the parsed price is already in your strategy’s cache.

3. The Kill: Kernel Bypass Networking

If FPGA is out of budget, kernel bypass with Solarflare/Mellanox NICs gets you 80% of the way.

Step 1: Enable OpenOnload

# Run the app under OpenOnload (Solarflare kernel bypass stack)
onload --profile=latency myapp

# Verify bypass is active
onload_stackdump | grep "UDP\|TCP"

Step 2: Pin to NUMA Node

# Bind application to NUMA node 0 (where NIC is attached)
numactl --cpunodebind=0 --membind=0 ./trading_engine

Step 3: Disable Interrupt Coalescing

# Solarflare: disable adaptive coalescing
ethtool -C eth0 adaptive-rx off rx-usecs 0 rx-frames 1

Verification:

# Before: 8-12µs RTT
# After: 1-2µs RTT
ping -c 100 <exchange_gateway> | tail -1

4. The Tool: Continuous Latency Monitoring

Don’t trust, verify. Every microsecond matters.

pip install latency-audit && latency-audit --check network

This verifies:

  • Kernel bypass is active (no packets through kernel stack)
  • Interrupt coalescing is disabled
  • NUMA topology is correct

5. Systems Thinking: The Trade-offs

  1. FPGA vs Software: FPGAs are 10x faster but 10x harder to debug. Your strategy complexity is limited by FPGA development velocity.

  2. Kernel Bypass vs Observability: When you bypass the kernel, you lose tcpdump, netstat, and standard debugging. You need custom tooling.

  3. Cost Curve: The last microsecond costs 10x more than the first. Know when to stop optimizing.

6. The Philosophy

In HFT, infrastructure is not a cost center. It is a profit function.

The difference between a profitable desk and a losing one is often not the algorithm; it's the 2 microseconds of network latency that determine whether you're first or second in queue. In a zero-sum game, second place is the first loser.

When someone asks about your edge, the honest answer is often: “Our plumbing is better.”
