Running several downloads at the same time with cURL speeds up artifact staging, mirror pulls, and bulk file collection, because the network link stays busy while more than one transfer is in flight.
Without extra flags, cURL handles multiple URLs one by one. --parallel starts them together, --parallel-max limits how many transfers can run at once, and --remote-name-all with --output-dir stores each response under its remote filename in one destination directory. By default, cURL waits briefly to see whether later transfers can reuse or multiplex an existing connection; --parallel-immediate changes that preference when startup speed matters more than keeping the connection count low.
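As a quick illustration with placeholder URLs, a small batch that values fast startup over connection reuse can pair --parallel with --parallel-immediate:
$ curl --parallel --parallel-immediate \
    --remote-name-all \
    https://dl.example/one.tgz \
    https://dl.example/two.tgz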
Parallel downloads still need sane file naming and a conservative concurrency limit. Use a clean target directory or names that are safe to overwrite, keep the batch small for shared or rate-limited endpoints, and add --remove-on-error so failed transfers do not leave partial files behind. cURL 8.16.0 and later add --parallel-max-host for per-host caps, but the flow below sticks to options that are already common in older packaged builds.
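Before leaning on any particular flag, it is worth checking what the installed build supports; curl --version prints the release number, and on builds new enough to support help categories, curl --help all lists every option, so filtering for "parallel" shows which of these flags are available.
$ curl --version
$ curl --help all | grep -- --parallel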
Steps to run parallel downloads with cURL:
- Create a destination directory for the files that cURL will save.
$ mkdir -p downloads
--output-dir writes every downloaded file into this directory, and the transfer fails if the directory does not already exist.
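If the directory might not exist when the batch runs, cURL can also create it on the fly with --create-dirs; this is an optional variant of the command in the next step, not a required change.
$ curl --parallel --remote-name-all \
    --create-dirs --output-dir downloads \
    https://dl.example/notes.txt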
- Start the batch with --parallel, keep the concurrency limit small, and save each URL by its remote filename.
$ curl --parallel \
    --parallel-max 3 \
    --remote-name-all \
    --output-dir downloads \
    --fail --show-error \
    --remove-on-error \
    https://dl.example/notes.txt \
    https://dl.example/amd64.tgz \
    https://dl.example/arm64.tgz
--remote-name-all applies the remote filename rule to every URL, while --remove-on-error deletes a target file when that transfer ends in error instead of leaving a partial download behind.
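For longer URL lists, the same options can come from a cURL config file instead of the command line; urls.txt here is just an assumed file name, and each url = line uses cURL's standard config syntax.
$ cat urls.txt
url = "https://dl.example/notes.txt"
url = "https://dl.example/amd64.tgz"
url = "https://dl.example/arm64.tgz"
$ curl --parallel --parallel-max 3 \
    --remote-name-all --output-dir downloads \
    --fail --show-error --remove-on-error \
    --config urls.txt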
- Confirm that every expected file exists in the output directory before you use the batch output.
$ ls -lh \
    downloads/notes.txt \
    downloads/amd64.tgz \
    downloads/arm64.tgz
-rw-r--r-- 1 user user 1.3K Apr 22 10:30 downloads/notes.txt
-rw-r--r-- 1 user user  64K Apr 22 10:30 downloads/amd64.tgz
-rw-r--r-- 1 user user  68K Apr 22 10:30 downloads/arm64.tgz
Missing files, zero-byte files, or obviously incomplete sizes mean one or more transfers failed. Rerun only the affected URLs instead of repeating the transfers that already succeeded.
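A minimal rerun for one failed transfer keeps the same flags and lists only that URL; substitute whichever file actually failed.
$ curl --fail --show-error --remove-on-error \
    --remote-name --output-dir downloads \
    https://dl.example/arm64.tgz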
