Setting the Scrapy log level keeps routine crawls readable and makes it easier to spot retries, download failures, and spider errors before a run finishes.
Scrapy sends crawl messages through Python's logging system and filters them with the LOG_LEVEL setting. A value saved in the project settings.py file applies to every spider in that project, while a -s LOG_LEVEL=… command-line override has higher priority for a single run.
Leaving LOG_LEVEL unset keeps the default DEBUG level. INFO is usually a better everyday choice because it keeps startup, shutdown, and crawl statistics visible, while WARNING hides routine progress and DEBUG quickly fills logs with request, middleware, and retry detail during long crawls.
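A quick way to see the filtering in action is to log at several levels from a spider and compare runs at different LOG_LEVEL values. The sketch below is a minimal hypothetical spider, with the name, URL, and messages invented for illustration; self.logger is Scrapy's documented per-spider logger.
import scrapy

class CatalogSpider(scrapy.Spider):
    name = "catalog"
    start_urls = ["https://example.com/"]  # hypothetical URL for illustration

    def parse(self, response):
        # Hidden once LOG_LEVEL is INFO or stricter.
        self.logger.debug("Fetched %d bytes from %s", len(response.body), response.url)
        # Shown at INFO and DEBUG.
        self.logger.info("Parsed %s", response.url)
        # Shown at WARNING, INFO, and DEBUG.
        if not response.css("title"):
            self.logger.warning("No <title> element on %s", response.url)
With LOG_LEVEL = "INFO", the debug line disappears while the info and warning lines remain, and the same filter applies to messages from Scrapy's own components.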
Steps to set the log level in Scrapy:
- Open the Scrapy project settings file.
$ vi catalog_demo/settings.py
The settings module created by scrapy startproject is usually named after the project, such as catalog_demo/settings.py.
- Add or update the LOG_LEVEL setting with the level needed for normal crawls.
LOG_LEVEL = "INFO"
Supported levels are CRITICAL, ERROR, WARNING, INFO, and DEBUG, and the default in current Scrapy releases remains DEBUG.
For one-off troubleshooting, a temporary override avoids editing settings.py again, as shown below; a value passed with -s takes priority over the project file for that run only.
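$ scrapy crawl catalog -s LOG_LEVEL=DEBUG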
Leaving a project at DEBUG during long or scheduled crawls grows log files quickly and records request URLs, redirect chains, and other verbose detail that quieter levels would keep out of stored logs.
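When a long crawl does need DEBUG, sending output to a file keeps the terminal readable and simplifies cleanup. LOG_FILE is a standard Scrapy setting; the filename below is only an example, written relative to the directory the crawl starts from.
LOG_FILE = "catalog_crawl.log"  # example filename for illustration
LOG_LEVEL = "DEBUG"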
- Run the spider from the project directory and confirm the startup log shows the expected setting.
$ scrapy crawl catalog
2026-04-16 05:58:01 [scrapy.utils.log] INFO: Scrapy 2.15.0 started (bot: catalog_demo)
2026-04-16 05:58:02 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'catalog_demo',
 'LOG_LEVEL': 'INFO',
##### snipped #####
2026-04-16 05:58:04 [scrapy.core.engine] INFO: Spider closed (finished)
If LOG_LEVEL does not appear under Overridden settings, the value was saved in the wrong settings file or the crawl is running against a different Scrapy project.
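The resolved value can also be checked without starting a crawl. Run from the project directory, scrapy settings --get prints a single setting after the project file and defaults are merged; the output below assumes the INFO value saved earlier.
$ scrapy settings --get LOG_LEVEL
INFO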
