How This Site Measures Latency
A deep dive into real-time latency measurement using the Performance API, Vercel Edge, and trading-grade observability techniques.
Why Should You Care?
In trading, milliseconds are money. A 1ms advantage in execution translates to capturing price moves before competitors. But how do you even measure latency accurately?
The RTT badge in the bottom-right of this site isn’t decoration; it’s a live measurement of network latency between your browser and the edge. Here’s exactly how it works.
What You’ll Learn
By the end of this lesson, you’ll understand:
- The Performance API - Browser-native timing with sub-millisecond precision
- RTT vs. TTFB - What each metric tells you (and doesn’t)
- Edge Computing - Why Vercel Edge functions change the latency math
- Statistical Sampling - Why we show σ± alongside RTT
The Foundation: What Is RTT?
Round-Trip Time (RTT) = Time for a request to travel from client → server → client.
```
Your Browser                                Server
     │                                         │
     │── Time A (request leaves) ────────────→ │
     │                                         │ ← processes request
     │←─ Time B (response arrives) ─────────── │
     │                                         │
```
RTT = Time B - Time A
RTT includes:
- Network latency (physical distance, routing)
- TLS handshake (on first request)
- Server processing (ideally near-zero for pings)
The “Aha!” Moment
Here’s the insight that separates latency experts from everyone else:
Browser-reported latency and actual server-side latency are different things. The Performance API measures what your user experiences, which includes TCP, TLS, and queueing, not just your handler’s execution time.
This is why a “fast” server can still feel slow. You need to measure the entire path.
Let’s See It In Action: Measuring RTT
Here’s how the RTT badge actually works. Open your browser’s DevTools console and run:
```javascript
// Measure RTT to this site's ping endpoint
const measure = async () => {
  const start = performance.now();
  await fetch('/api/ping?t=' + Date.now(), { // timestamp query busts caches
    cache: 'no-store',
    mode: 'cors',
  });
  const end = performance.now();
  const rtt = end - start;
  console.log(`RTT: ${rtt.toFixed(2)}ms`);
  return rtt;
};

// Run 5 samples, one second apart
const samples = [];
for (let i = 0; i < 5; i++) {
  samples.push(await measure());
  await new Promise((r) => setTimeout(r, 1000));
}

// Calculate mean and (population) standard deviation
const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
const std = Math.sqrt(
  samples.map((x) => (x - mean) ** 2).reduce((a, b) => a + b, 0) / samples.length
);
console.log(`Average RTT: ${mean.toFixed(1)}ms σ±${std.toFixed(1)}ms`);
```
Try it now and compare your RTT to the badge. They should roughly match; exact values vary run to run with network conditions.
Why We Show Standard Deviation (σ±)
A single RTT measurement is noisy. Network conditions change. Garbage collection spikes. Background tabs compete for resources.
That’s why the badge shows σ± (standard deviation):
- Low σ (< 5ms): Stable connection, consistent routing
- High σ (> 20ms): Jitter, congestion, or mobile network
For trading systems, jitter matters as much as average latency. A 10ms average with 50ms spikes is worse than a stable 15ms.
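To make that concrete, here’s a quick sketch with made-up sample values comparing a stable connection to a spiky one. The `stats` helper is defined here for illustration; it isn’t part of the badge’s code:

```javascript
// Compute mean, standard deviation, and worst case for a set of RTT samples
const stats = (xs) => {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const std = Math.sqrt(
    xs.map((x) => (x - mean) ** 2).reduce((a, b) => a + b, 0) / xs.length
  );
  return { mean, std, worst: Math.max(...xs) };
};

// Invented values (ms): the spiky connection has the *lower* average...
const stable = [14, 15, 16, 15, 14, 16, 15, 15, 14, 16]; // steady ~15ms
const spiky  = [6, 6, 7, 6, 50, 6, 7, 6, 6, 7];          // ~10ms avg, 50ms spikes

console.log(stats(stable)); // low σ: predictable tail latency
console.log(stats(spiky));  // high σ: better average, far worse worst case
```

The spiky connection wins on average but loses where it counts: its worst-case sample is more than 3× the stable connection’s worst case.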
Common Misconceptions
Let’s clear up some latency myths:
Myth: “My server responds in 2ms, so my latency is 2ms.”
Reality: Server processing is usually the smallest component. Network RTT, TLS handshakes, and TCP slow-start often dominate. A 2ms server response becomes 80ms+ on the first page load.
Myth: “CDNs fix latency.”
Reality: CDNs reduce latency for cacheable content. API calls and personalized content still hit origin servers. Edge functions (like Vercel Edge) solve this by running code at the edge.
Myth: “Ping and HTTP latency are the same.”
Reality: ICMP ping measures L3/L4 only. HTTP includes DNS resolution, TCP handshake, TLS negotiation, and HTTP parsing. HTTP latency is always higher.
The Performance API Deep Dive
The browser’s Performance API gives you forensic-level timing data:
```javascript
// After fetching, examine the full timing breakdown
const entries = performance.getEntriesByType('resource');
const pingEntry = entries.find((e) => e.name.includes('/api/ping'));
if (pingEntry) {
  console.log({
    dns: pingEntry.domainLookupEnd - pingEntry.domainLookupStart,
    tcp: pingEntry.connectEnd - pingEntry.connectStart,
    tls: pingEntry.secureConnectionStart > 0
      ? pingEntry.connectEnd - pingEntry.secureConnectionStart
      : 0, // 0 when the connection was reused (no new handshake)
    request: pingEntry.responseStart - pingEntry.requestStart,
    response: pingEntry.responseEnd - pingEntry.responseStart,
    total: pingEntry.responseEnd - pingEntry.startTime,
  });
}
```
This breaks down exactly where time is spent, which is crucial for optimization.
Why Edge Computing Changes Everything
Traditional architecture:
```
User (Tokyo) ──→ CDN ──→ Origin (Virginia)
                 ↓
             300ms RTT
```
Edge architecture (what this site uses):
```
User (Tokyo) ──→ Vercel Edge (Tokyo) ──→ Response
                 ↓
              15ms RTT
```
The edge function runs in the same region as the user. No transatlantic round-trips for API calls.
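For reference, a ping endpoint can be tiny. The sketch below is an assumption about what a handler like `/api/ping` might look like; this article doesn’t show the real source, and `pingHandler` is a hypothetical name:

```javascript
// Hypothetical sketch of a minimal edge ping handler (the real /api/ping
// source isn't shown here). On Vercel this would be the default export of
// api/ping.js, with `export const config = { runtime: 'edge' }`.
function pingHandler() {
  // Do as little work as possible, so measured RTT ≈ pure network time.
  return new Response('pong', {
    status: 200,
    headers: {
      'cache-control': 'no-store', // pings must never be served from cache
    },
  });
}
```

Keeping the handler near-zero-cost matters: any server-side work shows up in the user’s RTT measurement and can’t be distinguished from network time.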
Practice Exercise
Your mission: Compare RTT across different conditions.
- Baseline: Run the RTT measurement code above
- VPN Test: Connect to a VPN in a different continent, re-measure
- Mobile Test: Try from your phone on cellular
- Cache Warm: Run the measurement 10 times, compare first vs. last
What do you observe about TLS handshake impact on first request?
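One way to quantify that TLS impact is to compare the `secureConnectionStart`/`connectEnd` fields of the Resource Timing entries for the first request and a warm repeat. The helper below shows the arithmetic on mocked entry objects; the field names come from the Resource Timing spec, but the numbers are invented:

```javascript
// TLS handshake time for a PerformanceResourceTiming-like entry.
// secureConnectionStart is 0 when the connection was reused.
const tlsTime = (entry) =>
  entry.secureConnectionStart > 0
    ? entry.connectEnd - entry.secureConnectionStart
    : 0;

// Mock values (ms, made up) resembling a cold vs. warm request:
const cold = { secureConnectionStart: 30, connectEnd: 75 }; // new handshake
const warm = { secureConnectionStart: 0,  connectEnd: 0 };  // reused connection

console.log(`cold TLS: ${tlsTime(cold)}ms, warm TLS: ${tlsTime(warm)}ms`);
// cold TLS: 45ms, warm TLS: 0ms
```

Run this logic against real entries from `performance.getEntriesByType('resource')` and you’ll see the handshake cost vanish once the connection is kept alive.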
Key Takeaways
- RTT ≠ server latency - Measure the full path, not just your handler
- Show variance, not only averages - σ± reveals connection stability
- Edge code beats origin code - For latency-sensitive endpoints
- The Performance API is powerful - Use it to diagnose exactly where time goes
What’s Next?
🎯 Continue learning: What Is Latency? for the fundamentals
🔬 Expert version: How This Site Measures Latency: Full Technical Deep-Dive
Now you know exactly what that RTT badge measures, and how to measure it yourself. ⏱️
Pro Version: See the full research: PTP Time Synchronization