Running several downloads at the same time with cURL shortens artifact staging, mirror pulls, and bulk file collection because the network can stay busy while more than one transfer is in flight.
Without extra flags, cURL handles multiple URLs one by one. --parallel starts them together, --parallel-max limits how many transfers can run at once, and --remote-name-all with --output-dir stores each response under its remote filename in one destination directory. By default, cURL waits briefly to see whether later transfers can reuse or multiplex an existing connection; --parallel-immediate changes that preference when startup speed matters more than keeping the connection count low.
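When startup latency matters more than connection reuse, the waiting behavior can be switched off with --parallel-immediate. A minimal sketch (the fetch_parallel wrapper name is illustrative, not part of any standard tooling; the URLs are placeholders):

```shell
# fetch_parallel DIR URL...
# Starts all transfers at once (--parallel-immediate) instead of waiting
# to multiplex onto existing connections; caps concurrency at 3 and
# stores each response under its remote filename in DIR.
fetch_parallel() {
  dir="$1"; shift
  curl --parallel --parallel-immediate --parallel-max 3 \
    --remote-name-all --output-dir "$dir" "$@"
}

# Usage: fetch_parallel downloads https://dl.example/a.tgz https://dl.example/b.tgz
```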
Parallel downloads still need sane file naming and a conservative concurrency limit. Use a clean target directory or names that are safe to overwrite, keep the batch small for shared or rate-limited endpoints, and add --remove-on-error so failed transfers do not leave partial files behind. cURL 8.16.0 and later add --parallel-max-host for a per-host cap, but the flow below sticks to options that have long been available in packaged builds.
$ mkdir -p downloads
--output-dir writes every downloaded file into this directory; the transfer fails if the directory does not already exist (unless --create-dirs is also given).
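If pre-creating the directory is inconvenient, --create-dirs can be combined with --output-dir so cURL creates the path itself. A small sketch with a hypothetical fetch_into helper (the nested path is illustrative):

```shell
# fetch_into DIR URL
# --create-dirs lets cURL create the --output-dir path on demand,
# so a separate mkdir -p step is not needed.
fetch_into() {
  curl --remote-name --output-dir "$1" --create-dirs "$2"
}

# Usage: fetch_into downloads/nightly https://dl.example/notes.txt
```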
$ curl --parallel \
    --parallel-max 3 \
    --remote-name-all \
    --output-dir downloads \
    --fail --show-error \
    --remove-on-error \
    https://dl.example/notes.txt \
    https://dl.example/amd64.tgz \
    https://dl.example/arm64.tgz
--remote-name-all applies the remote filename rule to every URL, while --remove-on-error deletes a target file when that transfer ends in error instead of leaving a partial download behind. The hero image above shows the parallel progress meter from this same command.
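Because --fail makes cURL exit non-zero when a transfer fails, the whole batch is easy to gate in a script. A sketch using a hypothetical fetch_batch wrapper around the same flags:

```shell
# fetch_batch DIR URL...
# Exit status is non-zero if any transfer in the batch failed,
# so callers can gate later steps on the whole batch succeeding.
fetch_batch() {
  dir="$1"; shift
  curl --parallel --parallel-max 3 --remote-name-all \
    --output-dir "$dir" --fail --show-error --remove-on-error "$@"
}

# Usage: fetch_batch downloads https://dl.example/notes.txt && echo "batch ok"
```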
$ ls -lh \
    downloads/notes.txt \
    downloads/amd64.tgz \
    downloads/arm64.tgz
-rw-r--r-- 1 user user 1.3K Apr 22 10:30 downloads/notes.txt
-rw-r--r-- 1 user user  64K Apr 22 10:30 downloads/amd64.tgz
-rw-r--r-- 1 user user  68K Apr 22 10:30 downloads/arm64.tgz
Missing files, zero-byte files, or obviously incomplete sizes mean one or more transfers failed. Rerun only the affected URLs instead of repeating a successful batch.