Large wget transfers can fill a shared connection and make SSH sessions, browser traffic, or calls feel slow. A rate cap keeps the download moving while leaving predictable headroom for everything else on the same link.
GNU Wget throttles transfer speed with --limit-rate. Per the current GNU documentation, the value is in bytes per second, accepts k (kilobyte) and m (megabyte) suffixes, and also accepts decimal values such as 2.5m for finer tuning.
Wget enforces the cap by sleeping after fast network reads, so the average rate settles near the requested value during a longer transfer instead of staying exact on every progress line. Small files can finish before the limiter stabilizes, and each parallel wget process applies its own ceiling independently.
$ wget --limit-rate=300k https://downloads.example.net/release-4m.bin -O release-300k.bin
Length: 4194304 (4.0M) [application/octet-stream]
Saving to: 'release-300k.bin'
##### snipped #####
2026-04-22 07:25:00 (300 KB/s) - 'release-300k.bin' saved [4194304/4194304]
Use a file large enough that the transfer runs for several seconds; otherwise it can finish before the limiter has time to settle, and the reported average may land above the cap.
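A quick back-of-the-envelope check helps pick a suitable test file: expected duration is roughly size divided by rate. A minimal sketch, assuming wget's binary interpretation of the k suffix (1k = 1024 bytes):

```shell
# Expected duration of a capped transfer: size_in_bytes / rate_in_bytes.
# A 4 MiB file at --limit-rate=300k (300 * 1024 bytes per second):
size=$((4 * 1024 * 1024))   # 4194304 bytes
rate=$((300 * 1024))        # 307200 bytes/s
echo $((size / rate))       # prints 13 -- long enough for the limiter to settle
```

Anything that works out to only a second or two is too small to show a stable average on the progress line.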
$ wget --limit-rate=1m https://downloads.example.net/release-4m.bin -O release-1m.bin
##### snipped #####
2026-04-22 07:25:21 (1022 KB/s) - 'release-1m.bin' saved [4194304/4194304]
Decimal values such as --limit-rate=2.5m remain valid when 300k or 1m is too coarse for the link.
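To see what a decimal value works out to in bytes per second, assuming the same binary interpretation of the m suffix (1m = 1024 * 1024 bytes):

```shell
# --limit-rate=2.5m means 2.5 * 1024 * 1024 bytes per second.
awk 'BEGIN { print 2.5 * 1024 * 1024 }'   # prints 2621440
```

That sits between the 1m and 3m whole-number steps, which is exactly where decimal values earn their keep.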
~/.wgetrc:
limit_rate = 300k
A --limit-rate value given on the command line overrides the startup-file default.
Related: How to configure default options in ~/.wgetrc
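One way to confirm that precedence without touching your real startup file is the WGETRC environment variable, which points wget at an alternate config. A sketch (the /tmp path and the commented-out URL are illustrative):

```shell
# Create a throwaway startup file carrying a 300k default.
cat > /tmp/demo-wgetrc <<'EOF'
limit_rate = 300k
EOF

# The command-line flag wins over the file's limit_rate, e.g.:
# WGETRC=/tmp/demo-wgetrc wget --limit-rate=1m https://downloads.example.net/release-4m.bin
grep limit_rate /tmp/demo-wgetrc   # shows the file default: limit_rate = 300k
```

Running the commented wget line would report an average near 1 MB/s, not 300 KB/s.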
$ ls -lh release-300k.bin release-1m.bin
-rw-r--r-- 1 user user 4.0M Apr 22 07:25 release-300k.bin
-rw-r--r-- 1 user user 4.0M Apr 22 07:25 release-1m.bin
--limit-rate controls transfer speed only; it does not verify checksums or file integrity after the download finishes.
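Pairing the download with a checksum step closes that gap. A minimal sketch using sha256sum, where a demo file stands in for a real release and its published digest list:

```shell
# Stand-in for a downloaded file and the checksum list a project would publish.
printf 'demo payload\n' > /tmp/release-demo.bin
sha256sum /tmp/release-demo.bin > /tmp/release-demo.sha256

# After a real wget run, verify the file against the published list:
sha256sum -c /tmp/release-demo.sha256   # prints: /tmp/release-demo.bin: OK
```

With a real release you would download the project's .sha256 file alongside the binary and run the same -c check.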