Custom spider settings keep a single Scrapy project usable across targets that need different crawl speeds, logging levels, caching, or timeout behavior. Keeping spider-specific overrides next to the spider avoids repeated edits to the shared settings.py and reduces the risk of changing unrelated crawls.

Scrapy reads the custom_settings class attribute when a crawl starts and applies those values at spider priority. They override project settings from settings.py, but command-line -s overrides still take precedence, which makes custom_settings a good fit for fixed per-spider differences rather than one-off runtime experiments.
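
The precedence ladder is default < command < project < spider < cmdline. In current releases it can be inspected directly; a quick check, assuming Scrapy is importable:

    $ python -c "from scrapy.settings import SETTINGS_PRIORITIES; print(SETTINGS_PRIORITIES)"
    {'default': 0, 'command': 10, 'project': 20, 'spider': 30, 'cmdline': 40}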

Current Scrapy releases still support custom_settings for per-spider overrides, but the upstream docs now recommend update_settings() when a spider needs to compute values or merge dictionary settings such as FEEDS. Pre-crawler settings cannot be defined per spider, and reactor settings should not vary per spider when multiple spiders run in the same process.
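
A minimal update_settings() sketch, adapted from the pattern in the upstream docs (the ExportSpider name and the feed filename are illustrative): it merges a per-spider feed into FEEDS instead of replacing the whole dict, so project-level feeds from settings.py survive.

    import scrapy

    class ExportSpider(scrapy.Spider):
        name = "export"

        @classmethod
        def update_settings(cls, settings):
            # Apply custom_settings first, as the base implementation does.
            super().update_settings(settings)
            # Merge into FEEDS rather than overwrite it, keeping any
            # project-wide feeds defined in settings.py.
            settings.setdefault("FEEDS", {}).update(
                {f"{cls.name}-items.jsonl": {"format": "jsonlines"}}
            )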

Steps to use custom settings in a Scrapy spider:

  1. Open the spider file that needs its own crawl behavior.
    $ vi catalogdemo/spiders/catalog.py

    In a default project layout, spider modules live under <project_name>/spiders/.
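
    The steps below assume the standard startproject layout for a project named catalogdemo (only the relevant files shown):

    catalogdemo/
    ├── scrapy.cfg
    └── catalogdemo/
        ├── settings.py
        └── spiders/
            ├── __init__.py
            └── catalog.py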

  2. Add a custom_settings dictionary as a class attribute on the spider.
    import scrapy
     
    class CatalogSpider(scrapy.Spider):
        name = "catalog"
        allowed_domains = ["catalog.example"]
        start_urls = ["https://catalog.example/products/"]
        # Spider-only overrides, applied at spider priority for this crawl.
        custom_settings = {
            "CONCURRENT_REQUESTS": 4,
            "DOWNLOAD_DELAY": 1.5,
            "LOG_LEVEL": "INFO",
        }
     
        def parse(self, response):
            yield {
                "title": response.css("title::text").get(),
                "url": response.url,
            }

    Use custom_settings for fixed spider-only overrides such as DOWNLOAD_DELAY, DOWNLOAD_TIMEOUT, AUTOTHROTTLE_ENABLED, HTTPCACHE_ENABLED, or USER_AGENT. When the spider must compute values or merge dictionary settings, use update_settings() instead, as in the sketch near the top of this section.

    Command-line -s overrides still win over spider settings, and pre-crawler settings do not belong in custom_settings.
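
    Because cmdline priority beats spider priority, a one-off experiment can override the spider value at run time without editing the file, for example dropping the delay for a single run:
    $ scrapy crawl catalog -s DOWNLOAD_DELAY=0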

  3. Run the spider by its name value from the project root.
    $ scrapy crawl catalog
  4. Confirm the crawl starts with the spider overrides listed under Overridden settings.
    $ scrapy crawl catalog
    2026-04-16 06:32:37 [scrapy.crawler] INFO: Overridden settings:
    {'BOT_NAME': 'catalogdemo',
     'CONCURRENT_REQUESTS': 4,
     'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
     'DOWNLOAD_DELAY': 1.5,
     'LOG_LEVEL': 'INFO'}
    ##### snipped #####
    2026-04-16 06:32:39 [scrapy.core.engine] INFO: Spider closed (finished)

    The Overridden settings list mixes project-level overrides from settings.py (here BOT_NAME and CONCURRENT_REQUESTS_PER_DOMAIN) with the spider's keys; seeing CONCURRENT_REQUESTS, DOWNLOAD_DELAY, and LOG_LEVEL there confirms that Scrapy applied custom_settings before the crawl started.
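
    The merged values are also readable from inside the spider through self.settings. A short sketch reusing the spider from step 2 (the logging call is added for illustration):

    import scrapy

    class CatalogSpider(scrapy.Spider):
        name = "catalog"
        allowed_domains = ["catalog.example"]
        start_urls = ["https://catalog.example/products/"]
        custom_settings = {"CONCURRENT_REQUESTS": 4}

        def parse(self, response):
            # self.settings holds the final merged settings for this
            # crawl, so the custom_settings override is visible here.
            self.logger.info(
                "Effective CONCURRENT_REQUESTS: %s",
                self.settings.getint("CONCURRENT_REQUESTS"),
            )
            yield {"title": response.css("title::text").get()}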