
Pong.com Methodology

Pong.com is a browser-based speed test that runs against 16 dedicated Linode servers worldwide and the nearest Cloudflare edge node. Where many traditional speed tests report a single download number, often measured against a server inside the ISP's own network, Pong runs a five-stage test across the public internet and reports latency, jitter, sustained download, sustained upload, bufferbloat under load, and packet loss. The full methodology is documented below and is reproducible end to end. There is no hidden scoring algorithm.

// The test flow

  1. Detect the nearest server
     On page load the browser races small probes against all 16 Linode servers and the nearest Cloudflare edge. The lowest-latency endpoint is auto-selected; users can override the choice from the server picker.
  2. Run the latency test
     50 sequential HTTP round trips to the chosen server. We compute min, median, p95, p99, and jitter (standard deviation of inter-arrival times). The first two probes are discarded to avoid TCP and TLS warm-up bias.
  3. Run the download test
     Parallel HTTP GETs of large random-byte payloads, with concurrency that ramps from 1 to 8 streams. We measure sustained throughput over a 10 to 15 second window after warm-up. Slow-start and TCP congestion control are allowed to settle before measurement begins.
  4. Run the upload test
     Chunked POST uploads of random-byte payloads using the same concurrency-ramp pattern. Random bytes prevent any compression on the wire from inflating the result. Sustained throughput is reported, not peak.
  5. Run the bufferbloat test
     Latency probes are repeated while the connection is saturated by a download stream and again while saturated by an upload stream. The increase over baseline ping is reported as bufferbloat in milliseconds, with letter grades from A (under 30ms) to F (over 200ms).
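The statistics in step 2 can be sketched in a few lines. This is a minimal TypeScript version, assuming an array of round-trip times in milliseconds; `percentile` and `latencyStats` are illustrative names, and jitter is simplified here to the standard deviation of the retained samples rather than of inter-arrival gaps:

```typescript
// Nearest-rank percentile on a pre-sorted array.
function percentile(sorted: number[], p: number): number {
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

function latencyStats(rttsMs: number[]) {
  const samples = rttsMs.slice(2); // discard the two TCP/TLS warm-up probes
  const sorted = [...samples].sort((a, b) => a - b);
  const mean = samples.reduce((s, x) => s + x, 0) / samples.length;
  // Simplified jitter: population standard deviation of the samples.
  const jitter = Math.sqrt(
    samples.reduce((s, x) => s + (x - mean) ** 2, 0) / samples.length,
  );
  return {
    min: sorted[0],
    median: percentile(sorted, 50),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
    jitter,
  };
}
```

With 50 probes, the p99 figure rests on the one or two slowest samples, which is why the warm-up probes must be discarded first.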
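The throughput bookkeeping in steps 3 and 4 reduces to a ramp plan plus a unit conversion. The sketch below is not Pong's actual code; it assumes bytes are only counted once the ramp and TCP slow-start have settled, per the description above:

```typescript
// Concurrency ramps one stream at a time: [1, 2, ..., maxStreams].
function rampPlan(maxStreams = 8): number[] {
  return Array.from({ length: maxStreams }, (_, i) => i + 1);
}

// Megabits per second over the post-warm-up window only; peaks are ignored.
function sustainedMbps(bytesInWindow: number, windowSeconds: number): number {
  return (bytesInWindow * 8) / 1e6 / windowSeconds;
}
```

For example, 125 MB moved across all streams during a 10-second window works out to 100 Mbps sustained.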

// Why these metrics

Bufferbloat

Routers queue packets in oversized buffers to keep throughput high under load. The cost is latency: a connection that pings 15ms idle can balloon to 300ms or more once a download saturates the line. That is what makes a video call freeze when someone else uploads a video, and it is invisible to single-number speed tests. We measure the delta between idle ping and ping under load, in both directions, and grade it.
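The grading can be sketched as a simple threshold ladder. Note the hedge: only the A (under 30ms) and F (over 200ms) cut-offs come from the methodology above; the B through E thresholds below are placeholder assumptions, not Pong's published scale:

```typescript
// Grade the latency increase under load. Only the A and F boundaries are
// documented; the intermediate thresholds here are illustrative assumptions.
function bufferbloatGrade(idleMs: number, loadedMs: number): string {
  const delta = Math.max(0, loadedMs - idleMs); // bufferbloat in ms
  if (delta < 30) return "A";
  if (delta < 60) return "B"; // assumed threshold
  if (delta < 100) return "C"; // assumed threshold
  if (delta < 150) return "D"; // assumed threshold
  if (delta <= 200) return "E"; // assumed threshold
  return "F";
}
```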

Jitter

Average latency tells you how fast packets get there. Jitter tells you how consistent the timing is. A 30ms ping with 2ms jitter feels rock solid; a 30ms ping with 40ms jitter feels worse than a stable 60ms ping because every other packet arrives off-beat. Voice and video codecs can hide a small amount of jitter with playback buffers, but past a threshold the buffer drains and packets are dropped.

Why download alone is insufficient

A 1Gbps download is irrelevant to a Zoom call that needs 4Mbps up and stable latency. A 500Mbps download is irrelevant to a Warzone session that needs sub-50ms ping with sub-10ms jitter. Picking connections by raw download is like picking cars by top speed without checking whether the brakes work. Pong measures all the failure modes that matter to real applications, then translates them into per-use-case experience scores.

// Server network

16 dedicated Linode (Akamai) servers, each with a one-gigabit symmetric uplink, plus the nearest Cloudflare edge node for latency and jitter. The browser auto-selects the lowest-latency server but users can pin a specific one to compare routes.
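The auto-selection step amounts to probing every endpoint concurrently and keeping the fastest. A minimal sketch, assuming each server exposes a small timed probe (the `Endpoint` shape and injectable `probe` function are illustrative, not Pong's actual API; in the browser the probe would be a timed fetch of a tiny URL):

```typescript
interface Endpoint {
  code: string; // e.g. "EWR"
  url: string;
}

// Probe every endpoint in parallel and return the lowest-latency one.
// The probe function is injected so the selection logic is testable offline.
async function pickNearest(
  endpoints: Endpoint[],
  probe: (ep: Endpoint) => Promise<number>,
): Promise<Endpoint> {
  const timed = await Promise.all(
    endpoints.map(async (ep) => ({ ep, ms: await probe(ep) })),
  );
  timed.sort((a, b) => a.ms - b.ms);
  return timed[0].ep;
}
```

Racing all probes in parallel keeps the selection step fast even when distant servers take hundreds of milliseconds to answer.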

Code   City          Role
EWR    Newark        US East primary
ATL    Atlanta       US Southeast
DFW    Dallas        US South Central
FMT    Fremont       US West (Bay Area)
ORD    Chicago       US Midwest
SEA    Seattle       US Pacific Northwest
MIA    Miami         US Southeast / LATAM gateway
LAX    Los Angeles   US West (LA metro)
YYZ    Toronto       Canada
LHR    London        UK and Western Europe
FRA    Frankfurt     Continental Europe hub
NRT    Tokyo         Japan and East Asia
SIN    Singapore     Southeast Asia hub
BOM    Mumbai        India and South Asia
SYD    Sydney        Australia and Oceania
GRU    São Paulo     South America

// Data sources

All speed-test results on Pong.com come from our own measurements against our own servers. Nothing on the test page is sourced from third parties. For the analysis content on the blog and on city, state, and ISP pages, we cite and cross-reference the following public datasets:

  • FCC Broadband Data Collection (BDC). Authoritative US broadband availability and provider coverage.
  • Ookla open data. Quarterly tile-level performance datasets used for cross-checks.
  • NTIA broadband maps. Federal funding allocation and underserved-area data.
  • BroadbandNow. Provider directory and pricing references.
  • Measurement Lab (M-Lab). Independent open measurement data, useful for sanity-checking regional trends.

// Accuracy and limitations

Pong measures what your browser sees over TCP and HTTP. That is the same protocol stack used by streaming video, video calls, cloud storage, web apps, and most modern software, so the numbers map directly to the experience of using those apps. The measurement is honest about what a browser can and cannot see.

What we cannot do from a browser is talk raw UDP or measure ping with the same overhead a native game client gets. Real games use UDP and have direct kernel paths to the network stack; a browser test adds roughly 5 to 15 milliseconds of overhead from the HTTP/TCP/TLS layers. Treat Pong's latency numbers as an upper bound on what an in-game ping should be against the same target server. The relative comparisons (server A vs server B, before router fix vs after) are unaffected by the overhead and remain accurate.

We also cannot measure links beyond the test endpoint. A test from your house to our Newark server tells you about the path between those two endpoints. If your specific game server lives in a different facility on a different network, your in-game numbers may differ. The 16-city server mesh is designed to keep at least one Pong endpoint close to wherever your real traffic is going, but it is not a substitute for measuring the exact path of a specific application.

// Open methodology

Pong publishes its full methodology because trust in a measurement tool requires the ability to reproduce its results. Most major consumer speed tests, including the largest commercial providers, keep their server selection, scoring, and aggregation logic proprietary. That makes it difficult to know why two tools disagree, or to evaluate whether a result is meaningful.

The page you are reading describes every measurement Pong performs, the order they run in, the parameters used, and the limitations of each. For a deeper background read, see How internet speed tests work.

// Related

  • Run a speed test: Try the test described on this page
  • About Pong.com: Why we built it and who built it
  • How internet speed tests work: Background reading on speed-test design
  • Bufferbloat test: Standalone test for the latency-under-load metric
  • Ping test: 50-sample latency analysis with full percentiles
  • Global latency map: Latency to all 16 server locations from your browser
  • Tools: Full diagnostic kit (ping, traceroute, DNS, IP)