Long-running transfers over flaky links often stall or fail just before completion, turning multi-gigabyte downloads into repetitive busywork. Automatic retry behaviour in wget allows large images, backups, and archives to survive short outages, DNS hiccups, or overloaded mirrors without manual restarts.

As a non-interactive downloader, wget uses command-line options to control retry limits, wait intervals, and resume behaviour across connection attempts. The --tries option caps the total number of attempts, --waitretry sets the maximum delay inserted between retries, and --continue resumes a partial file from the last written byte rather than discarding existing data.

Retry configuration influences both reliability and upstream load: excessive retries or very short wait intervals can overwhelm mirrors or trigger automated blocking, while moderate defaults combined with resume support provide a safer baseline for unattended jobs that handle large files.
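
For unattended jobs, these options are commonly combined in a single invocation; a conservative baseline (the values here are illustrative rather than prescribed defaults) might look like:
    $ wget --tries=5 --waitretry=10 --continue https://downloads.example.net/files/largefile.iso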

Steps to retry downloads automatically using wget:

  1. Open a terminal in the directory intended to store the downloaded file.
    $ cd ~/retry
  2. Start a download with a finite retry limit using --tries=3 and --retry-on-http-error=503 so that transient failures trigger additional attempts instead of aborting immediately.
    $ wget --tries=3 --retry-on-http-error=503 https://www.example.com/status/503
    --2026-01-10 05:11:54--  https://www.example.com/status/503
    Resolving www.example.com (www.example.com)... 203.0.113.50
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    
    --2026-01-10 05:11:55--  (try: 2)  https://www.example.com/status/503
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    
    --2026-01-10 05:11:57--  (try: 3)  https://www.example.com/status/503
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Giving up.

    A value such as --tries=3 usually balances robustness against the risk of wasting time on a host that remains unavailable.
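
    By default wget treats a refused connection as a fatal error rather than a transient one; if the mirror is expected to restart quickly, the --retry-connrefused option can be added so that refused connections are retried as well:
    $ wget --tries=3 --retry-connrefused --retry-on-http-error=503 https://www.example.com/status/503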

  3. Add --waitretry=1 so that each failed attempt pauses briefly before retrying instead of hammering the remote server.
    $ wget --tries=3 --waitretry=1 --retry-on-http-error=503 https://www.example.com/status/503
    --2026-01-10 05:12:01--  https://www.example.com/status/503
    Resolving www.example.com (www.example.com)... 203.0.113.50
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    
    --2026-01-10 05:12:02--  (try: 2)  https://www.example.com/status/503
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    
    --2026-01-10 05:12:03--  (try: 3)  https://www.example.com/status/503
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Giving up.

    Very short retry intervals or high retry counts can resemble abusive traffic patterns and may lead to throttling or IP blacklisting by cautious mirrors.
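
    The value given to --waitretry acts as an upper bound rather than a fixed delay: wget waits 1 second after the first failure, 2 seconds after the second, and so on up to that limit, so a larger cap such as --waitretry=10 produces a gentler linear backoff for unattended jobs:
    $ wget --tries=5 --waitretry=10 --retry-on-http-error=503 https://www.example.com/status/503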

  4. Combine retries with --continue so that partially downloaded files resume rather than restart when a connection breaks.
    $ wget --tries=3 --waitretry=1 --continue https://downloads.example.net/files/largefile.iso
    --2026-01-10 05:12:17--  https://downloads.example.net/files/largefile.iso
    Resolving downloads.example.net (downloads.example.net)... 203.0.113.50
    Connecting to downloads.example.net (downloads.example.net)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 206 Partial Content
    Length: 524288 (512K), 393216 (384K) remaining [application/x-iso9660-image]
    Saving to: 'largefile.iso'
    
            [ skipping 100K ]
       100K ,,,,,,,,,, ,,,,,,,,,, ,,,,,,,,.. .......... .......... 29%  588M 0s
    ##### snipped #####

    When a matching partial file exists in the current directory, --continue appends new data starting from the last completed byte instead of discarding previous progress.
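
    Because wget exits with a non-zero status when it gives up, the command can also be wrapped in a small shell loop that keeps resuming the same transfer after a pause; a minimal sketch using the same URL:
    $ until wget --tries=3 --waitretry=1 --continue https://downloads.example.net/files/largefile.iso; do sleep 60; done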

  5. Enable indefinite retry behaviour only when appropriate by passing --tries=0 together with a conservative wait interval.
    $ wget --tries=0 --waitretry=1 --retry-on-http-error=503 https://www.example.com/status/503
    --2026-01-10 05:12:27--  https://www.example.com/status/503
    Resolving www.example.com (www.example.com)... 203.0.113.50
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    
    --2026-01-10 05:12:28--  (try: 2)  https://www.example.com/status/503
    Connecting to www.example.com (www.example.com)|203.0.113.50|:443... connected.
    HTTP request sent, awaiting response... 503 Service Unavailable
    Retrying.
    ##### snipped #####

    Unlimited retries can loop for hours or days if a host remains offline, so this pattern fits controlled environments with monitoring rather than casual desktop use.
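
    For such long unattended runs, detaching the job and keeping a log file makes it easier to watch from monitoring; the -b option sends wget to the background and -o writes all output to the named file (the name retry.log is arbitrary):
    $ wget -b -o retry.log --tries=0 --waitretry=1 --continue https://downloads.example.net/files/largefile.iso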

  6. Confirm that the completed file matches expectations by checking size or verifying a known checksum after the retried transfer finishes.
    $ ls -lh largefile.iso
    -rw-r--r-- 1 user user 512K Jan 10 04:05 largefile.iso
    
    $ sha256sum largefile.iso
    07854d2fef297a06ba81685e660c332de36d5d18d546927d30daad6d7fda1541  largefile.iso

    A matching hash from the download source or documentation provides stronger assurance of integrity than file size comparisons alone.
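
    When the download source publishes its hashes, sha256sum -c can perform the comparison automatically; the following pipes the expected value from above into a check (a published .sha256 file would work the same way):
    $ echo "07854d2fef297a06ba81685e660c332de36d5d18d546927d30daad6d7fda1541  largefile.iso" | sha256sum -c -
    largefile.iso: OK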