Long-running transfers over flaky links often stall or fail just before completion, turning multi-gigabyte downloads into repetitive busywork. Automatic retry behaviour in wget allows large images, backups, and archives to survive short outages, DNS hiccups, or overloaded mirrors without manual restarts.
As a non-interactive downloader, wget uses command-line options to control retry limits, wait intervals, and resume behaviour across connection attempts. The --tries option bounds how many failures are tolerated, --waitretry inserts a delay between retries, and --continue resumes partial files from the last written byte rather than discarding existing data.
Retry configuration influences both reliability and upstream load, because excessive retries or very short wait intervals can overwhelm mirrors or trigger automated blocking; moderate defaults combined with resume support provide a safer baseline for unattended jobs handling large files.
Steps to retry downloads automatically using wget:
- Open a terminal and change to the directory where the downloaded file should be stored.
$ cd ~/Downloads
- Start a download with a finite retry limit using --tries=3 and --retry-on-http-error=503 so that transient failures trigger additional attempts instead of aborting immediately.
$ wget --tries=3 --retry-on-http-error=503 https://httpbin.org/status/503
--2025-12-21 08:42:54--  https://httpbin.org/status/503
Resolving httpbin.org (httpbin.org)... 172.17.0.10
Connecting to httpbin.org (httpbin.org)|172.17.0.10|:443... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2025-12-21 08:42:55--  (try: 2)  https://httpbin.org/status/503
Reusing existing connection to httpbin.org:443.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2025-12-21 08:42:57--  (try: 3)  https://httpbin.org/status/503
Reusing existing connection to httpbin.org:443.
HTTP request sent, awaiting response... 503 Service Unavailable
Giving up.
A value such as --tries=3 usually balances robustness against the risk of wasting time on a host that remains unavailable.
- Add --waitretry=1 so that each failed attempt pauses briefly before retrying instead of hammering the remote server.
$ wget --tries=3 --waitretry=1 --retry-on-http-error=503 https://httpbin.org/status/503
--2025-12-21 08:43:03--  https://httpbin.org/status/503
Resolving httpbin.org (httpbin.org)... 172.17.0.10
Connecting to httpbin.org (httpbin.org)|172.17.0.10|:443... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2025-12-21 08:43:04--  (try: 2)  https://httpbin.org/status/503
Reusing existing connection to httpbin.org:443.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2025-12-21 08:43:05--  (try: 3)  https://httpbin.org/status/503
Reusing existing connection to httpbin.org:443.
HTTP request sent, awaiting response... 503 Service Unavailable
Giving up.
Very short retry intervals or high retry counts can resemble abusive traffic patterns and may lead to throttling or IP blacklisting by cautious mirrors.
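A more conservative pacing profile can combine --waitretry with wget's --wait and --random-wait options, which space out and jitter successive retrievals in batch jobs. A minimal sketch against the same illustrative httpbin endpoint (the small values keep the demonstration quick; unattended jobs would typically use longer waits):

```shell
# Bounded attempts with a growing pause between retries (--waitretry
# caps the backoff at 2s here); --wait and --random-wait jitter the
# delay between successive retrievals when fetching multiple files.
wget --tries=2 \
     --waitretry=2 \
     --wait=1 \
     --random-wait \
     --retry-on-http-error=503 \
     https://httpbin.org/status/503 \
  || echo "gave up after bounded retries"
```

Because the endpoint always answers 503, the command exhausts its attempts and the fallback message prints, which mirrors how a wrapper script might log a permanently failed transfer.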
- Combine retries with --continue so that partially downloaded files resume rather than restart when a connection breaks.
$ wget --tries=3 --waitretry=1 --continue https://downloads.example.com/files/largefile.iso
--2025-12-21 08:43:56--  https://downloads.example.com/files/largefile.iso
Resolving downloads.example.com (downloads.example.com)... 172.17.0.10
Connecting to downloads.example.com (downloads.example.com)|172.17.0.10|:443... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 5242880 (5.0M), 4194304 (4.0M) remaining [application/octet-stream]
Saving to: 'largefile.iso'

     [ skipping 1000K ]
 1000K ,,,,,,,,,, ,,,,,,,,,, ,,,,...... .......... .......... 20% 1009M 0s
##### snipped #####
When a matching partial file exists in the current directory, --continue appends new data starting from the last completed byte instead of discarding previous progress.
- Enable indefinite retry behaviour only when appropriate by passing --tries=0 together with a conservative wait interval.
$ wget --tries=0 --waitretry=1 --retry-on-http-error=503 https://httpbin.org/status/503
--2025-12-21 08:43:43--  https://httpbin.org/status/503
Resolving httpbin.org (httpbin.org)... 172.17.0.10
Connecting to httpbin.org (httpbin.org)|172.17.0.10|:443... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

--2025-12-21 08:43:44--  (try: 2)  https://httpbin.org/status/503
Reusing existing connection to httpbin.org:443.
HTTP request sent, awaiting response... 503 Service Unavailable
Retrying.

##### snipped #####
Unlimited retries can loop for hours or days if a host remains offline, so this pattern fits controlled environments with monitoring rather than casual desktop use.
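In such controlled environments, indefinite retries are usually paired with backgrounding and a log file so the job can be checked later; wget's -b (background) and -o (logfile) options cover both. A sketch using the same illustrative placeholder URL as above:

```shell
# Retry indefinitely, resume partial data, run in the background, and
# send all progress output to wget.log instead of the terminal.
wget -b -o wget.log \
     --tries=0 \
     --waitretry=5 \
     --continue \
     https://downloads.example.com/files/largefile.iso

sleep 2              # give the background process a moment to start
tail -n 5 wget.log   # spot-check how the transfer is going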
- Confirm that the completed file matches expectations by checking size or verifying a known checksum after the retried transfer finishes.
$ ls -lh largefile.iso
-rw-r--r-- 1 alex alex 5.0M Dec 21 07:22 largefile.iso
$ sha256sum largefile.iso
c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29  largefile.iso
A matching hash from the download source or documentation provides stronger assurance of integrity than file size comparisons alone.
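The comparison itself can be automated with sha256sum's --check mode, which reads expected hashes from a file. A minimal local sketch (the file names are illustrative stand-ins for a real download and its published checksum list):

```shell
# Create a stand-in file, record its hash the way a mirror would
# publish it, then verify the file against the recorded hash.
echo "illustrative payload" > sample.bin
sha256sum sample.bin > sample.bin.sha256
sha256sum -c sample.bin.sha256   # prints 'sample.bin: OK'
```

In practice the .sha256 file comes from the download site or its documentation, and a non-zero exit status from sha256sum -c signals a corrupted or incomplete transfer.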
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
