Benchmarking a web server shows how quickly a single endpoint responds and how latency or errors change as concurrency rises. A short, controlled ApacheBench run helps confirm whether a code, config, or infrastructure change improved capacity, introduced a regression, or pushed the service beyond a safe request rate.
ApacheBench (ab) opens multiple client connections to one URL and repeats the same request until it reaches a target request count (-n) or a fixed time limit (-t). Its report highlights fields such as Requests per second, Time per request, transfer rate, and percentile timing so the same scenario can be compared across repeated runs.
Because ab benchmarks one URL at a time and only partially implements HTTP/1.x, it is best for quick endpoint checks rather than browser-like traffic patterns. Use a URL with an explicit path such as http://host.example.net/, start with low concurrency so the client host does not become the bottleneck, and run higher-load tests only where permission exists.
ab rejects URLs that stop at the host name.
$ ab -n 1 -c 1 http://host.example.net
ab: invalid URL
Usage: ab [options] [http[s]://]hostname[:port]/path
##### snipped #####
$ ab -n 20 -c 2 http://host.example.net/
##### snipped #####
Complete requests:      20
Failed requests:        0
If this first run shows failures or non-2xx responses, fix correctness before increasing load.
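That gate can be automated with a small helper that scans a saved report for trouble before any higher-load run; the helper name and file names below are illustrative, not part of ab.

```shell
# Hypothetical helper: refuse to proceed when a saved ab report shows
# failed requests or any non-2xx responses.
check_report() {
  if grep -Eq 'Failed requests:[[:space:]]*[1-9]|Non-2xx responses' "$1"; then
    echo "$1: fix correctness before increasing load"
    return 1
  fi
  echo "$1: looks clean"
}

# Example usage (illustrative file name):
# ab -n 20 -c 2 http://host.example.net/ > ab-smoke.txt && check_report ab-smoke.txt
```

Note that ab only prints a Non-2xx responses line when such responses occurred, which is why the helper treats its mere presence as a failure signal.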
$ ab -n 1000 -c 10 http://host.example.net/
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
##### snipped #####
Concurrency Level:      10
Time taken for tests:   0.674 seconds
Complete requests:      1000
Failed requests:        0
Requests per second:    1483.21 [#/sec] (mean)
Time per request:       6.742 [ms] (mean)
Time per request:       0.674 [ms] (mean, across all concurrent requests)
Transfer rate:          3121.43 [Kbytes/sec] received
Common pattern: ab -n N -c C [-k] http://host/.
Requests per second is throughput, the first Time per request line is end-to-end latency per request, and the second Time per request line divides the total test time across the concurrency level.
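The relationship between those three numbers can be checked by hand from the raw counts. The figures below are copied from the sample run above; small differences against the printed report come only from rounding of the total time.

```shell
# Recompute the summary fields from the sample run's raw counts:
# 1000 requests, concurrency 10, 0.674 s total.
requests=1000
concurrency=10
total_time=0.674   # seconds

# Requests per second = requests / total time
awk -v n="$requests" -v t="$total_time" \
  'BEGIN { printf "throughput: %.2f req/s\n", n / t }'

# Time per request (mean, across all concurrent requests) = total time / requests
awk -v n="$requests" -v t="$total_time" \
  'BEGIN { printf "per request, all workers: %.3f ms\n", t / n * 1000 }'

# Time per request (mean) = the value above multiplied by the concurrency level
awk -v n="$requests" -v t="$total_time" -v c="$concurrency" \
  'BEGIN { printf "per request, end to end: %.3f ms\n", t / n * c * 1000 }'
```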
$ ab -n 10000 -c 50 http://host.example.net/
Large concurrency values can overwhelm the target or saturate the client machine, so coordinate production testing and increase load gradually.
$ ab -n 10000 -c 50 -k http://host.example.net/
-k is useful when the application normally serves traffic over persistent connections.
$ ab -t 30 -c 20 -k http://host.example.net/
-t caps the benchmark duration and internally implies -n 50000, a request count large enough that the timer, not the count, normally ends the run.
$ ab -n 2000 -c 20 -l http://host.example.net/
Without -l, a changing response size usually appears as a length failure even when the application is behaving normally.
$ ab -n 1000 -c 20 -H 'Host: host.example.net' -H 'Authorization: Bearer REDACTED' http://203.0.113.10/
Repeat -H for additional headers, and use -A user:pass when the endpoint requires HTTP Basic authentication.
$ cat > payload.json <<'EOF'
{"message":"hello"}
EOF
$ ab -n 500 -c 10 -p payload.json -T 'application/json' http://host.example.net/api/
Benchmarking write operations can create or modify data, so use an idempotent endpoint or a disposable test environment.
$ ab -n 5000 -c 50 -k -e percentiles.csv -g times.tsv http://host.example.net/
-e writes percentile data as CSV, while -g writes tab-separated per-request timing data.
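The per-request file can answer questions the summary omits, such as a rough 95th percentile of total request time. The sketch below assumes the column layout ab writes with -g (starttime, seconds, ctime, dtime, ttime, wait), where ttime in column 5 is the total request time in milliseconds; the helper name is illustrative.

```shell
# Hypothetical helper: approximate p95 of total request time from an ab -g file.
# Drops the header row, keeps the ttime column (5), sorts the samples
# numerically, then takes the value at the 95% rank.
p95_ttime() {
  tail -n +2 "$1" | cut -f5 | sort -n |
    awk '{ v[NR] = $1 } END { printf "p95 ~ %s ms\n", v[int(NR * 0.95)] }'
}

# Example usage:
# p95_ttime times.tsv
```

For a quick check, ab's own report already prints a percentile table; this is mainly useful when you want percentiles not listed there or want to post-process the raw samples.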
$ ab -n 5000 -c 50 -k http://host.example.net/ > ab-5000-50-k.txt
$ grep -E 'Requests per second|Time per request|Failed requests|Non-2xx responses' ab-5000-50-k.txt
Failed requests:        0
Requests per second:    34414.88 [#/sec] (mean)
Time per request:       1.453 [ms] (mean)
Time per request:       0.029 [ms] (mean, across all concurrent requests)
Run ab -h when you need additional flags such as -s, -q, or -r.
Keep the URL, headers, body, concurrency, and network path the same when comparing one run against another.
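Under that discipline, a before/after comparison reduces to extracting the same summary field from each saved report; the helper name and report file names below are illustrative.

```shell
# Hypothetical helper: print one summary field from each saved ab report so
# two runs can be compared side by side.
compare_field() {
  pattern="$1"; shift
  for f in "$@"; do
    # Match lines beginning with the field name, split on the colon,
    # and prefix the value with the report file name.
    awk -F': *' -v file="$f" -v p="$pattern" \
      '$0 ~ "^" p { print file ": " $2 }' "$f"
  done
}

# Example usage (illustrative file names):
# compare_field 'Requests per second' ab-before.txt ab-after.txt
```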