Transient network failures can interrupt an otherwise valid wget job long before the remote artifact is actually unavailable. Explicit retry settings let unattended downloads survive short outages, temporary backend errors, and flaky name resolution without requiring manual restarts.
Retry behavior is controlled by a small set of flags: --tries caps the attempt budget, --waitretry sets the upper bound on wget's linearly increasing pause between retries (1 second after the first failure, 2 after the second, and so on), and --retry-on-http-error and --retry-on-host-error widen which failures are treated as transient. Combined correctly, these options keep a one-off hiccup from aborting the whole transfer.
Retries still need guardrails. Unlimited attempts can mask a real outage for far too long, and retrying a non-resumable partial download wastes bandwidth, so keep the attempt budget finite and add --continue when preserving partial bytes matters for large files.
Steps to retry downloads automatically using wget:
- Start in a clean working directory so each retry run has one obvious destination file and log context.
$ mkdir -p ~/downloads/retry-demo
$ cd ~/downloads/retry-demo
A dedicated directory makes it easier to distinguish a newly downloaded artifact from an older partial file that may need resume logic.
- Run wget with a finite retry budget and explicit transient error handling.
$ wget --tries=3 --waitretry=2 --retry-on-host-error --retry-on-http-error=503 https://downloads.example.net/files/unstable-256k.bin
--2026-03-27 06:59:49--  https://downloads.example.net/files/unstable-256k.bin
Connecting to downloads.example.net (downloads.example.net)|203.0.113.50|:443... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2026-03-27 06:59:50--  (try: 2)  https://downloads.example.net/files/unstable-256k.bin
Connecting to downloads.example.net (downloads.example.net)|203.0.113.50|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262144 (256K) [application/octet-stream]
Saving to: 'unstable-256k.bin'
...
2026-03-27 06:59:50 (172 MB/s) - 'unstable-256k.bin' saved [262144/262144]
--retry-on-http-error=503 is what turns a temporary server-side failure into a retry instead of an immediate hard stop.
- Add --continue when the download can leave a partial file that should be resumed instead of restarted.
$ wget --tries=5 --waitretry=5 --continue --retry-on-http-error=429,500,502,503,504 https://downloads.example.net/files/large-backup.tar.gz
Use --continue only when the remote server supports range requests and the URL points to a stable file.
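One way to check for range-request support before relying on --continue is to probe the response headers with wget --spider -S and look for Accept-Ranges: bytes. A minimal sketch; a canned header block stands in for the live probe, and the URL is the example placeholder from above:

```shell
# In practice, capture live headers with:
#   wget --spider -S "https://downloads.example.net/files/large-backup.tar.gz" 2>&1
# A canned response stands in for the probe so the check itself is clear.
headers='HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1073741824'

if printf '%s\n' "$headers" | grep -qi '^accept-ranges: bytes'; then
    echo "resumable: --continue can pick up where a failed run stopped"
else
    echo "not resumable: --continue may restart the transfer from byte 0"
fi
```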
- Reserve unlimited retries for jobs that are supervised by an external timeout, queue, or watchdog.
$ wget --tries=0 --waitretry=15 --retry-on-host-error --retry-on-http-error=429,503 https://downloads.example.net/files/nightly-build.tar.gz
--tries=0 never stops on its own, so use it only when another layer is responsible for eventually failing the job.
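When no external scheduler exists, coreutils timeout(1) can act as that watchdog layer. The sketch below uses a stand-in sleep so the deadline behavior is visible without a network; in practice the wget invocation from the previous step would take its place:

```shell
# timeout(1) kills the wrapped command after the deadline and exits 124,
# giving an unlimited-retry wget job a hard upper bound, e.g.:
#   timeout 30m wget --tries=0 --waitretry=15 --retry-on-http-error=429,503 "$url"
# A stand-in sleep demonstrates the exit-code contract:
timeout 2 sleep 10
status=$?
if [ "$status" -eq 124 ]; then
    echo "watchdog expired: job failed after the hard deadline"
fi
```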
- Verify the final artifact before another tool consumes it.
$ ls -lh unstable-256k.bin
-rw-r--r-- 1 user user 256K Mar 27 06:59 unstable-256k.bin
A valid file on disk confirms that the retry policy recovered the transfer instead of only retrying and failing silently.
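Beyond checking the size, comparing the file against a published checksum catches truncated or corrupted transfers. A sketch of the sha256sum -c workflow; demo.bin and its checksum file are stand-ins created locally, since real projects ship the checksum alongside the artifact:

```shell
# Real projects publish a .sha256 file next to the artifact; download it and run:
#   sha256sum -c unstable-256k.bin.sha256
# Stand-in artifact and checksum so the verification step itself is runnable:
printf 'demo payload' > demo.bin
expected=$(sha256sum demo.bin | awk '{print $1}')
printf '%s  demo.bin\n' "$expected" > demo.bin.sha256
sha256sum -c demo.bin.sha256
```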
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
