Setting concurrent requests in Scrapy controls how many downloads stay active at the same time, which changes crawl speed, memory use, and how much pressure the crawler puts on a target site.

Scrapy schedules downloads asynchronously and applies concurrency per downloader slot rather than per callback. CONCURRENT_REQUESTS caps the total in-flight downloads across the crawler. Projects created with a current scrapy startproject already include a per-domain concurrency line set to 1 and a one-second download delay in settings.py, so tuning usually starts by editing those existing lines and optionally raising the crawler-wide cap.
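
In recent Scrapy versions the generated settings.py carries those lines out of the box, while older templates ship the same keys commented out. The generated block looks roughly like this (comments paraphrased, not the exact template text):

    # settings.py as written by scrapy startproject (recent versions)
    #CONCURRENT_REQUESTS = 16           # crawler-wide cap; 16 is also the default
    CONCURRENT_REQUESTS_PER_DOMAIN = 1  # polite per-domain starting point
    DOWNLOAD_DELAY = 1                  # one second between requests to a domain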

Higher values do not guarantee higher throughput. The configured download delay, delay randomization (RANDOMIZE_DOWNLOAD_DELAY), AutoThrottle, spider-level custom_settings, and site-side rate limits can all keep practical concurrency below the configured ceiling. Older projects that still set the deprecated CONCURRENT_REQUESTS_PER_IP apply the limit per IP instead of per domain.
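
A back-of-envelope sketch shows why (the latency figure is illustrative): Scrapy spaces requests to the same downloader slot by the configured delay, so the delay alone caps the per-domain rate no matter how high the concurrency ceiling goes.

    # Rough per-domain throughput bound when a download delay is set.
    download_delay = 1.0     # seconds between requests to one domain
    avg_response_time = 0.3  # hypothetical target latency, seconds

    # At most ~1/delay requests per second reach the domain.
    max_rate = 1 / download_delay                  # ~1.0 request/s

    # Requests only overlap while a response takes longer than the delay,
    # so per-domain concurrency beyond this buys nothing for fast targets.
    useful_concurrency = max(1, round(avg_response_time / download_delay))
    print(max_rate, useful_concurrency)            # 1.0 1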

Steps to set concurrent requests in Scrapy:

  1. Open the Scrapy project settings file.
    $ vi simplifiedguide/settings.py

    In a default project layout, the file is usually <project_name>/settings.py.

  2. Add or update the crawler-wide and per-domain concurrency lines in settings.py.
    CONCURRENT_REQUESTS = 8
    CONCURRENT_REQUESTS_PER_DOMAIN = 4

    Leaving CONCURRENT_REQUESTS unset keeps Scrapy's crawler-wide default of 16; new projects already include a per-domain limit of 1 and a one-second download delay, which these lines replace.

    If an older project still sets the deprecated CONCURRENT_REQUESTS_PER_IP to a non-zero value, that per-IP limit overrides CONCURRENT_REQUESTS_PER_DOMAIN.
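
    If only one spider needs different limits, its custom_settings dict overrides these project values for that spider alone. A minimal sketch, with an illustrative spider name, target, and values:

    # simplifiedguide/spiders/products.py
    import scrapy

    class ProductsSpider(scrapy.Spider):
        name = "products"
        start_urls = ["https://example.com/"]  # placeholder target
        # These keys override settings.py for this spider only.
        custom_settings = {
            "CONCURRENT_REQUESTS_PER_DOMAIN": 2,
            "DOWNLOAD_DELAY": 0.5,
        }

        def parse(self, response):
            yield {"url": response.url}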

  3. Check the effective concurrency and delay values that the project is loading.
    $ scrapy settings --get CONCURRENT_REQUESTS
    8
    $ scrapy settings --get CONCURRENT_REQUESTS_PER_DOMAIN
    4
    $ scrapy settings --get DOWNLOAD_DELAY
    1

    If the delay value still prints 1, Scrapy spaces same-domain requests roughly one second apart, so fast targets behave close to one active request per domain until you lower that delay or responses become slow enough to overlap.
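
    To experiment without editing settings.py, the same keys can be passed per run with -s, which takes the highest settings priority and also overrides custom_settings (the values here are only for testing):

    $ scrapy crawl products -s CONCURRENT_REQUESTS_PER_DOMAIN=8 -s DOWNLOAD_DELAY=0.25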

  4. Run the spider and confirm the crawl log reports the intended concurrency overrides.
    $ scrapy crawl products -s LOG_LEVEL=INFO
    2026-04-16 06:00:11 [scrapy.crawler] INFO: Overridden settings:
    {'BOT_NAME': 'simplifiedguide',
     'CONCURRENT_REQUESTS': 8,
     'CONCURRENT_REQUESTS_PER_DOMAIN': 4,
     'DOWNLOAD_DELAY': 1,
     'LOG_LEVEL': 'INFO',
     'NEWSPIDER_MODULE': 'simplifiedguide.spiders',
     'ROBOTSTXT_OBEY': True,
     'SPIDER_MODULES': ['simplifiedguide.spiders']}
    ##### snipped #####
    2026-04-16 06:00:13 [scrapy.core.engine] INFO: Spider closed (finished)

    Seeing both concurrency keys in Overridden settings confirms the crawl loaded the project values before the spider ran.

    If a domain still behaves serially, check DOWNLOAD_DELAY, RANDOMIZE_DOWNLOAD_DELAY, AutoThrottle, and any spider custom_settings before raising the limits further.
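
    If the goal is adaptive throttling rather than a fixed ceiling, AutoThrottle can adjust the delay at runtime while still respecting the hard concurrency caps above. A minimal sketch for settings.py (the target value is a judgment call, not a recommendation):

    AUTOTHROTTLE_ENABLED = True
    AUTOTHROTTLE_START_DELAY = 1           # initial download delay, seconds
    AUTOTHROTTLE_MAX_DELAY = 10            # ceiling when the site slows down
    AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0  # average parallel requests per site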