Websites specify crawling rules in robots.txt to guide well-behaved crawlers. By default, wget honors these restrictions when it retrieves pages recursively. However, certain testing or archival scenarios require ignoring robots.txt to reach paths the file disallows.
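For reference, robots.txt lives at the site root (for example, https://www.example.com/robots.txt) and lists which paths crawlers may or may not fetch. The snippet below is a hypothetical example for illustration only, not taken from any real site:
User-agent: *
Disallow: /private/
Disallow: /search
Allow: /public/
A crawler that honors these rules skips /private/ and /search; telling wget to ignore robots.txt makes it fetch those paths anyway.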
By passing -e robots=off on the command line or setting robots = off in its configuration file, wget will disregard these directives. While convenient for certain tasks, this should be done ethically and with respect for server resources.
Ignoring robots.txt may violate site policies and can result in blocking or other consequences. Consider potential impacts before disabling these constraints.
Use this option responsibly and ensure compliance with the website’s terms of service and legal requirements.
Steps to ignore robots.txt in Wget:
- Open the terminal.
- Use the -e robots=off option, followed by the website URL. Because robots.txt only affects recursive retrievals, combine it with -r (or --mirror) when crawling a site.
$ wget -e robots=off -r https://www.example.com/
This bypasses any restrictions specified in the website’s robots.txt file during the recursive download; a fuller, throttled example is shown after these steps.
- To make wget ignore robots.txt in all future sessions, add the setting to its startup file, ~/.wgetrc.
$ echo "robots = off" >> ~/.wgetrc
This makes ignoring robots.txt the default for every future wget session; an annotated sample configuration is shown after these steps.
- Run wget again without the -e robots=off option to verify that the configuration is applied.
$ wget -r https://www.example.com/
The recursive download proceeds while ignoring robots.txt, as set in the configuration file.
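As referenced in the first step, the sketch below shows a one-off recursive fetch with robots.txt ignored. The URL, depth, and throttling values are placeholders chosen for illustration, not recommendations:
$ wget -e robots=off --recursive --level=2 --no-parent \
      --wait=1 --limit-rate=200k https://www.example.com/docs/
The --wait and --limit-rate options are not required for ignoring robots.txt, but they keep the request rate modest, in line with the cautions above.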
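As mentioned in the second step, the resulting startup file might look like the following. Only the robots line is required; the wait and limit_rate entries are optional politeness settings included purely as an illustration:
# Sample ~/.wgetrc
# Do not fetch or honor robots.txt during recursive downloads
robots = off
# Optional: pause one second between retrievals
wait = 1
# Optional: cap bandwidth at roughly 200 KB/s
limit_rate = 200k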
