Limiting transfer bandwidth in cURL keeps scripted downloads from flooding shared WAN links, VPN tunnels, or metered connections. A predictable ceiling is useful when backups, mirrors, or large artifact pulls need to run in the background without starving interactive traffic.
The throttle is set with --limit-rate, which caps the transfer rate for the active cURL operation. The value is expressed in bytes per second unless a 1024-based suffix such as K, M, or G is appended, and the same option works for both downloads and uploads.
This is a client-side average, not a network QoS policy. Short spikes around the target value are normal because cURL smooths the rate over several seconds, so confirm the effective speed with the progress meter or the %{speed_download} write-out variable before treating the cap as an operational ceiling.
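Because the cap is a bytes-per-second average, a quick shell calculation can estimate how long a capped transfer should take. A back-of-envelope sketch with hypothetical numbers, a 2048 KiB file throttled to 200 KiB/s:

```shell
# Estimate transfer duration at a given --limit-rate cap: size / rate.
# Hypothetical numbers; substitute your own artifact size and ceiling.
size_bytes=$((2048 * 1024))   # artifact size in bytes (2048 KiB)
cap_bytes=$((200 * 1024))     # --limit-rate 200K = 200 * 1024 bytes per second
est_seconds=$((size_bytes / cap_bytes))
echo "expected duration: ~${est_seconds}s at the capped rate"
```

For this input the estimate is about 10 seconds, which lines up with what the progress meter reports when the cap is holding.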
Related: How to run parallel downloads with cURL
Related: How to measure request timing with cURL
Related: How to handle timeouts in cURL
Steps to limit bandwidth in cURL:
- Run the transfer with --limit-rate and write the response to a file so the progress meter can show the capped speed.
$ curl --limit-rate 200K --output toolkit-2026.03.0-linux-amd64.tar.zst "https://artifacts.example.com/releases/toolkit-2026.03.0-linux-amd64.tar.zst"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2048k  100 2048k    0     0   205k      0  0:00:09  0:00:09 --:--:--  211k
The live transfer rate shown in the progress meter should settle near the configured ceiling once the transfer is underway.
- Choose the ceiling in bytes per second or with a 1024-based suffix that matches the link budget.
$ curl --limit-rate 80K --output site-assets-2026.03.tar.zst "https://artifacts.example.com/releases/site-assets-2026.03.tar.zst"
$ curl --limit-rate 1M --output base-image-amd64.iso "https://mirror.example.com/images/base-image-amd64.iso"
$ curl --limit-rate 10M --output nightly-db-backup.tar.zst "https://backup.example.com/exports/nightly-db-backup.tar.zst"
--limit-rate accepts raw bytes per second or suffixes such as K, M, and G, where 1K equals 1024 bytes.
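The suffix arithmetic can be checked directly in the shell. A small sketch converting the ceilings used above into raw bytes per second:

```shell
# Convert 1024-based --limit-rate suffixes into raw bytes per second.
for spec in 80:K 1:M 10:M; do
  num=${spec%%:*}
  unit=${spec##*:}
  case $unit in
    K) mult=1024 ;;
    M) mult=$((1024 * 1024)) ;;
    G) mult=$((1024 * 1024 * 1024)) ;;
  esac
  echo "${num}${unit} = $((num * mult)) bytes/s"
done
```

So 80K caps the transfer at 81,920 bytes/s, 1M at 1,048,576 bytes/s, and 10M at 10,485,760 bytes/s.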
- Measure the achieved download rate with --write-out when the cap needs to be checked in scripts or repeated tests.
$ curl --silent --show-error --limit-rate 200K --output /dev/null --write-out "speed_download=%{speed_download}\ntime_total=%{time_total}\nsize_download=%{size_download}\n" "https://artifacts.example.com/releases/toolkit-2026.03.0-linux-amd64.tar.zst"
speed_download=213435
time_total=9.825705
size_download=2097152
speed_download is the average speed for the full transfer, so values slightly above or below the nominal cap are expected.
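When the check runs unattended, the reported average can be compared against the cap with a tolerance band instead of an exact match. A sketch using the sample values above; in a real script, `measured` would be captured from curl's --write-out output rather than hardcoded:

```shell
# Compare a measured speed_download against the configured cap.
# Hypothetical values; in a real script, read `measured` from --write-out.
cap=204800          # --limit-rate 200K in bytes per second
measured=213435     # speed_download from the sample run
tolerance_pct=10    # accept +/-10% deviation from the nominal ceiling

within=$(awk -v c="$cap" -v m="$measured" -v t="$tolerance_pct" 'BEGIN {
  dev = (m - c) / c * 100
  if (dev <= t && dev >= -t) print "ok"; else print "out-of-band"
}')
echo "cap check: $within"
```

Here the measured average is about 4% above the 200K ceiling, so the check passes; a result far outside the band usually means the cap was mistyped or the transfer finished too quickly for the average to be meaningful.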
- Apply the same flag to upload workflows when the remote endpoint accepts PUT, FTP, or SFTP uploads.
$ curl --limit-rate 120K --upload-file ./ops-report-2026-03-29.tar.gz "sftp://sftp.example.com/home/ops/uploads/ops-report-2026-03-29.tar.gz"
--limit-rate slows the client side only. Authentication, server-side quotas, and protocol support for the chosen upload URL still need to be handled separately.
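For recurring jobs, the cap can also live in a curl config file instead of being repeated on every command line. A hypothetical ~/.curlrc sketch; curl reads this file by default, and --config points at an alternative path:

```
# Hypothetical ~/.curlrc for background transfer jobs
# Cap every transfer started by this user at 120 KiB/s
limit-rate = 120K
# Quiet progress output but keep errors visible in cron logs
silent
show-error
```

Since the default config file is processed before command-line arguments, a later --limit-rate flag on the command line still overrides the file's value for a one-off transfer.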
- Use --rate only when the job needs to slow how often new transfers start, not how fast an active transfer moves data.
$ curl --rate 3/m --remote-name "https://artifacts.example.com/releases/toolkit-2026.03.0-linux-amd64.tar.zst" --remote-name "https://artifacts.example.com/releases/toolkit-2026.03.0-linux-arm64.tar.zst"
--limit-rate caps throughput for one transfer, while --rate controls transfer start frequency in multi-URL jobs and does not affect --parallel runs.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
