Running several HTTP transfers at the same time with cURL shortens completion time for bulk downloads, large data exports, and sharded datasets, especially on high-latency links. Multiple connections stay active together, so the network link is used more efficiently instead of waiting for each request to finish before starting the next.
Modern cURL builds accept multiple URLs in a single invocation and, from version 7.66.0 onward, can process them concurrently with the -Z or --parallel option. Internally the tool drives the libcurl multi interface, opening several connections, reusing sockets when possible, and reporting progress across all active transfers in one terminal window.
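As a minimal illustration, the short and long forms are interchangeable; the host below is the same placeholder used throughout this guide:

$ curl -Z -O http://downloads.example.net/file.bin -O http://downloads.example.net/video.mp4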
Concurrent transfers rely on a recent cURL version and on sensible concurrency limits that avoid overwhelming either side of the connection. The commands below assume Ubuntu with a repository-provided curl (for example 7.81.0 or newer), but the same flags apply on most Linux systems. Very aggressive parallelism can trigger throttling, consume API quotas quickly, or saturate local CPU and bandwidth, so moderate settings are usually safer than pushing the maximum.
Related: How to save cURL output to a file
Related: How to limit bandwidth in cURL
Steps to run parallel downloads with cURL:
- Open a terminal where shell access is available.
$ whoami
root
- Confirm that the installed curl version supports parallel transfers.
$ curl --version
curl 8.5.0 (aarch64-unknown-linux-gnu) libcurl/8.5.0 OpenSSL/3.0.13 zlib/1.3 brotli/1.1.0 zstd/1.5.5 libidn2/2.3.7 libpsl/0.21.2 (+libidn2/2.3.7) libssh/0.10.6/openssl/zlib nghttp2/1.59.0 librtmp/2.3 OpenLDAP/2.6.7
Release-Date: 2023-12-06, security patched: 8.5.0-2ubuntu10.6
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM PSL SPNEGO SSL threadsafe TLS-SRP UnixSockets zstd
cURL version 7.66.0 or newer understands -Z and --parallel for concurrent transfers; older builds reject them as unknown options and exit with an error.
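For unattended scripts, the same check can be automated. This sketch assumes GNU sort, which provides version ordering via -V and a silent sortedness check via -C:

$ ver=$(curl --version | awk 'NR==1{print $2}')
$ printf '%s\n' 7.66.0 "$ver" | sort -V -C && echo "curl $ver supports --parallel"

The second command prints the confirmation only when the installed version sorts at or above 7.66.0.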
- Create a configuration file that lists each url to download, with an output directive naming the destination file where needed.
$ cat > urls.conf << 'EOF'
url = "http://downloads.example.net/file.bin"
output = "data-part1.bin"
url = "http://downloads.example.net/video.mp4"
output = "data-part2.bin"
url = "http://downloads.example.net/dataset.bin"
output = "data-part3.bin"
EOF
Options in the configuration file use the same syntax as ~/.curlrc, so headers, cookies, authentication tokens, and other per-request settings can be added alongside each url; a next directive starts a fresh option group when a setting should apply to only one request.
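As an example, a bearer token can be scoped to a single download by grouping it with its url before a next directive; the token, file names, and host below are placeholders:

$ cat > urls-auth.conf << 'EOF'
url = "http://downloads.example.net/private.bin"
output = "private.bin"
header = "Authorization: Bearer EXAMPLE_TOKEN"
next
url = "http://downloads.example.net/public.bin"
output = "public.bin"
EOF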
- Run curl in parallel mode against the configuration file with a conservative concurrency limit.
$ curl --parallel --parallel-max 3 --parallel-immediate --config urls.conf
DL% UL%  Dled  Uled Xfers Live   Total  Current  Left    Speed
--  --      0     0     3    3 --:--:-- --:--:-- --:--:--     0
100 --  8704k     0     3    0 --:--:-- --:--:-- --:--:--  499M
Increasing --parallel-max too high can overload remote servers, exhaust API quotas, or congest local network links; the default upper limit without this option is 50 concurrent transfers.
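A gentler pattern for sensitive servers or metered quotas is to pair a lower concurrency with a per-transfer rate cap and automatic retries; the figures here are arbitrary starting points, not recommendations:

$ curl --parallel --parallel-max 2 --limit-rate 1M --retry 3 --config urls.conf

Note that --limit-rate caps each transfer individually, so the aggregate rate can still reach the cap multiplied by the number of live transfers.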
- Optionally run a small set of URLs in parallel directly on the command line without a configuration file.
$ curl --parallel --parallel-max 3 \
    -O http://downloads.example.net/file.bin \
    -O http://downloads.example.net/video.mp4 \
    -O http://downloads.example.net/dataset.bin
The -O (remote name) flag saves each transfer using the file name from the URL, which keeps the invocation shorter when explicit output directives are unnecessary.
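When the list grows, --remote-name-all applies the same remote-name behavior to every URL without repeating -O; this variant assumes the same three sample files:

$ curl --parallel --parallel-max 3 --remote-name-all \
    http://downloads.example.net/file.bin \
    http://downloads.example.net/video.mp4 \
    http://downloads.example.net/dataset.bin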
- Verify that every expected file exists and has a non-zero size when the transfers finish.
$ ls -lh data-part1.bin data-part2.bin data-part3.bin
-rw-r--r-- 1 root root 512K Jan 10 05:38 data-part1.bin
-rw-r--r-- 1 root root 5.0M Jan 10 05:38 data-part2.bin
-rw-r--r-- 1 root root 3.0M Jan 10 05:38 data-part3.bin
A successful batch usually shows a zero exit status from curl, progress lines without errors, and all target files present with plausible sizes.
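A short loop can confirm the same thing from a script; the file names mirror the output directives above, and the -s test passes only for a file that exists with a size greater than zero:

$ for f in data-part1.bin data-part2.bin data-part3.bin; do [ -s "$f" ] || echo "missing or empty: $f"; done

Adding --fail to the curl invocation also turns HTTP errors such as 404 into a non-zero exit status, which makes scripted checks more reliable.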
