Interrupted network transfers for large archives, ISO images, or backup files consume bandwidth and time without producing a usable result. Resuming an existing partial file avoids restarting from zero and makes unreliable connections more tolerable, especially over long-distance or congested links. Using wget for this purpose keeps the workflow scriptable and consistent across many Unix-like environments.
The wget command-line tool implements resuming by reusing the local file already present on disk and requesting only the missing portion from the remote server. When invoked with the --continue option, wget inspects the current file size, sends an HTTP Range header or protocol equivalent, and expects a partial response (typically 206 Partial Content) that begins at the correct byte offset. Combining this behavior with verbose output and header inspection simplifies troubleshooting of interrupted downloads.
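On unreliable links, the resume behavior pairs well with wget's retry options. The sketch below builds such a command as a string and prints it rather than running it, so it works offline; the URL is the placeholder used throughout this article, and `--tries`, `--waitretry`, and `--retry-connrefused` are standard wget options.

```shell
# Placeholder URL from this article's examples.
url="https://downloads.example.net/images/large-file.iso"

# --continue        resume from the existing partial file
# --tries=20        retry up to 20 times on failure
# --waitretry=10    back off up to 10 seconds between retries
# --retry-connrefused  treat "connection refused" as transient
cmd="wget --continue --tries=20 --waitretry=10 --retry-connrefused $url"

# Print instead of executing; run with: eval "$cmd"
echo "$cmd"
```

Wrapping the invocation this way also makes it easy to drop into a cron job or backup script.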
Reliable resuming depends on server support for byte-range requests and on the remote file remaining unchanged between attempts. If the provider rotates files, modifies content under the same URL, or disables ranges, a partially downloaded file can no longer be trusted and must be discarded. Verifying length and checksums guards against silent corruption, and cautious handling of removal commands prevents accidental loss of unrelated data.
Steps to resume interrupted downloads using Wget:
- Open a terminal session on the system where the partial download is stored.
- Change to the directory that contains the partially downloaded file.
$ cd ~/Downloads
$ ls -lh
total 1.1G
-rw-r--r-- 1 user user 450M Feb 15 13:48 large-file.iso
-rw-r--r-- 1 user user  12K Feb 15 13:40 readme.txt
- Resume the interrupted download using the original URL and the --continue option.
$ wget --continue https://downloads.example.net/images/large-file.iso
--2025-02-15 14:32:01--  https://downloads.example.net/images/large-file.iso
Resolving downloads.example.net (downloads.example.net)... 203.0.113.42
Connecting to downloads.example.net (downloads.example.net)|203.0.113.42|:443... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 1048576000 (1000M), 576716800 (550M) remaining
Saving to: ‘large-file.iso’

large-file.iso       55%[===========>          ]  550M  11.2MB/s   ETA 45s
##### snipped #####
--continue reuses the existing local file and asks the server to send only the remaining bytes when range requests are supported.
- Check whether the remote server advertises support for resumable downloads if the resume attempt restarts from zero or fails.
$ wget --server-response --spider https://downloads.example.net/images/large-file.iso 2>&1 | grep -iE 'HTTP/|Accept-Ranges'
  HTTP/1.1 200 OK
  Accept-Ranges: bytes
##### snipped #####
The combination of a successful HTTP status and an Accept-Ranges: bytes header indicates that the server allows byte-range requests suitable for resuming.
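When scripting this check, the header test can drive a yes/no decision before any resume attempt. The sketch below runs offline by substituting a captured sample for the real output of `wget --server-response --spider URL 2>&1`; the sample header values are illustrative assumptions.

```shell
# Stand-in for captured output of: wget --server-response --spider URL 2>&1
headers='HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1048576000'

# Resume is only worth attempting when the server advertises byte ranges.
if printf '%s\n' "$headers" | grep -qi '^Accept-Ranges: *bytes'; then
    resumable=yes
else
    resumable=no
fi
echo "resumable=$resumable"
```

A script can then fall back to a fresh download (without `--continue`) when the answer is no.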
- Use the --output-document option when the desired local filename differs from the name used on the server while still resuming from existing data.
$ wget --continue --output-document=backup-image.iso https://downloads.example.net/images/large-file.iso
Specifying --output-document keeps a consistent local naming scheme even if the provider changes filenames or query-string parameters in the URL.
- Replace an invalid partial file with a clean download when verification or provider documentation indicates that the remote file has changed.
$ rm large-file.iso
$ wget https://downloads.example.net/images/large-file.iso
Removing a file with rm is irreversible, so the filename and path should be checked carefully to avoid deleting unrelated data.
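One way to reduce that risk is to check that the target really is an incomplete download before deleting it. This sketch creates a small stand-in file in a temporary directory so it runs offline; the filename and the expected full size (1048576000 bytes, from this article's example) are assumptions to substitute for your own values.

```shell
# Work in a throwaway directory so the sketch cannot touch real data.
tmpdir=$(mktemp -d)
file="$tmpdir/large-file.iso"
expected_size=1048576000   # full size published by the provider (assumed)

# Stand-in partial file: 1 KiB of zeros instead of a real partial ISO.
dd if=/dev/zero of="$file" bs=1024 count=1 2>/dev/null

actual_size=$(wc -c < "$file")
if [ "$actual_size" -lt "$expected_size" ]; then
    echo "partial file ($actual_size of $expected_size bytes): discarding"
    rm -- "$file"
fi
rmdir "$tmpdir"
```

Only a file smaller than the published size is removed; a full-sized file is left alone for checksum verification instead.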
- Verify that the completed file matches the expected size and checksum after the transfer finishes.
$ ls -lh large-file.iso
-rw-r--r-- 1 user user 1000M Feb 15 14:34 large-file.iso
$ sha256sum large-file.iso
8c1e4a5f2d7c9b3e1a04d76f5b2a1c9e0d8f6e4c3b2a1908f7e6d5c4b3a29105  large-file.iso
Checksum verification is successful when the computed hash exactly matches the reference value published by the file provider.
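The comparison can be automated with `sha256sum -c`, which reads reference lines in `HASH  FILENAME` format and fails loudly on a mismatch. The sketch below generates its own reference file from a throwaway payload so it runs offline; in real use, the `SHA256SUMS` file comes from the provider.

```shell
# Throwaway file standing in for the downloaded ISO.
tmpdir=$(mktemp -d)
printf 'demo payload\n' > "$tmpdir/large-file.iso"

# In real use this reference file is downloaded from the provider.
( cd "$tmpdir" && sha256sum large-file.iso > SHA256SUMS )

# sha256sum -c exits non-zero when any listed hash does not match.
if ( cd "$tmpdir" && sha256sum -c SHA256SUMS ); then
    verified=yes
else
    verified=no
fi
echo "verified=$verified"
rm -rf "$tmpdir"
```

Scripts can branch on the exit status: keep the file on success, delete it and re-download on failure.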
- Confirm that no further data needs to be fetched by rerunning the resume command and observing that nothing is downloaded.
$ wget --continue https://downloads.example.net/images/large-file.iso
File ‘large-file.iso’ already there; not retrieving.
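The steps above can be combined into a retry loop that keeps resuming until the transfer completes. In this sketch, `fetch` is a hypothetical stand-in that fails twice before succeeding, so the loop can be exercised offline; in real use its body would be `wget --continue "$url"`.

```shell
# Stand-in for the download command; fails on the first two calls.
attempts=0
fetch() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # succeed on the third try
}

# Retry until fetch succeeds, with a cap to avoid looping forever.
tries=0
until fetch; do
    tries=$((tries + 1))
    if [ "$tries" -ge 10 ]; then
        echo "giving up after $tries failed attempts" >&2
        break
    fi
    sleep 1   # back off briefly before resuming
done
echo "finished after $attempts attempts"
```

Because wget exits non-zero on an interrupted transfer and `--continue` picks up where the partial file ends, each pass through the loop fetches only the bytes still missing.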
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
