Installing Scrapy with pip provides a repeatable way to deploy the same Python crawling stack across multiple machines while tracking releases published on PyPI.

The pip installer downloads the Scrapy distribution and its dependencies, then registers the scrapy console command as a script entry point in the active Python environment. For a user-scoped install on Linux, that script commonly lands in ~/.local/bin, while virtual environments place it under the environment's own bin directory.
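A quick way to see where that entry point lives is to ask the interpreter itself; both modules below are in the standard library, so this sketch assumes nothing beyond Python 3 being installed:

```python
import shutil
import sysconfig

# Directory where pip places console-script entry points for this
# interpreter; inside a venv this resolves to <venv>/bin.
print(sysconfig.get_path("scripts"))

# Full path to the scrapy command if it is on PATH, otherwise None.
print(shutil.which("scrapy"))
```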

Some distributions also provide Scrapy as an OS package, which can be tightly integrated but may lag behind PyPI. On platforms enforcing PEP 668, pip can refuse to modify the system Python and require a virtual environment or another supported install target.
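PEP 668 works by placing an EXTERNALLY-MANAGED marker file in the interpreter's standard library directory; pip refuses system-wide installs when it finds that file. A small check, using only the standard library:

```python
import pathlib
import sysconfig

# PEP 668 environments carry an EXTERNALLY-MANAGED marker file in the
# standard library directory; its presence is what makes pip refuse.
marker = pathlib.Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"
print("externally managed" if marker.exists() else "pip may modify this environment")
```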

Steps to install Scrapy using pip:

  1. Create a virtual environment for the Scrapy install.
    $ python3 -m venv ~/.venvs/scrapy

    On PEP 668 systems, installing into a venv avoids modifying the system Python.

  2. Activate the virtual environment.
    $ source ~/.venvs/scrapy/bin/activate

The same source command works in zsh; fish and csh users should instead source activate.fish or activate.csh from the same directory.
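To confirm the activation took effect from within Python, compare the two prefix attributes; inside a venv they differ, because sys.prefix points at the environment while sys.base_prefix keeps pointing at the base interpreter:

```python
import sys

# An active virtual environment redirects sys.prefix to the venv
# directory; sys.base_prefix still names the base installation.
print("venv active" if sys.prefix != sys.base_prefix else "no venv active")
```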

  3. Install Scrapy with pip inside the virtual environment.
    $ pip install scrapy
    Collecting scrapy
      Downloading scrapy-2.13.4-py3-none-any.whl.metadata (4.4 kB)
    Collecting cryptography>=37.0.0 (from scrapy)
      Downloading cryptography-46.0.3-cp311-abi3-manylinux_2_34_aarch64.whl.metadata (5.7 kB)
    Collecting cssselect>=0.9.1 (from scrapy)
      Downloading cssselect-1.3.0-py3-none-any.whl.metadata (2.6 kB)
    ##### snipped #####
    Installing collected packages: pydispatcher, zope-interface, w3lib, urllib3, typing-extensions, queuelib, pycparser, pyasn1, protego, packaging, lxml, jmespath, itemadapter, idna, filelock, defusedxml, cssselect, constantly, charset_normalizer, certifi, automat, attrs, requests, pyasn1-modules, parsel, incremental, hyperlink, cffi, twisted, requests-file, itemloaders, cryptography, tldextract, service-identity, pyopenssl, scrapy
    Successfully installed attrs-25.4.0 automat-25.4.16 certifi-2025.11.12 cffi-2.0.0 charset_normalizer-3.4.4 constantly-23.10.4 cryptography-46.0.3 cssselect-1.3.0 defusedxml-0.7.1 filelock-3.20.1 hyperlink-21.0.0 idna-3.11 incremental-24.11.0 itemadapter-0.13.0 itemloaders-1.3.2 jmespath-1.0.1 lxml-6.0.2 packaging-25.0 parsel-1.10.0 protego-0.5.0 pyasn1-0.6.1 pyasn1-modules-0.4.2 pycparser-2.23 pydispatcher-2.0.7 pyopenssl-25.3.0 queuelib-1.8.0 requests-2.32.5 requests-file-3.0.1 scrapy-2.13.4 service-identity-24.2.0 tldextract-5.3.1 twisted-25.5.0 typing-extensions-4.15.0 urllib3-2.6.2 w3lib-2.3.1 zope-interface-8.1.1
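The installed release can also be confirmed programmatically, without importing Scrapy itself, via the standard library's package metadata API:

```python
from importlib import metadata

# Query the installed release from package metadata; importing the
# scrapy package itself is not required for this.
try:
    print(metadata.version("scrapy"))
except metadata.PackageNotFoundError:
    print("scrapy is not installed in this environment")
```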
  4. Run scrapy to confirm the command is available.
    $ scrapy
    Scrapy 2.13.4 - no active project
    
    Usage:
      scrapy <command> [options] [args]
    
    Available commands:
      bench         Run quick benchmark test
      fetch         Fetch a URL using the Scrapy downloader
      genspider     Generate new spider using pre-defined templates
      runspider     Run a self-contained spider (without creating a project)
      settings      Get settings values
      shell         Interactive scraping console
      startproject  Create new project
      version       Print Scrapy version
      view          Open URL in browser, as seen by Scrapy
    
      [ more ]      More commands available when run from project directory
    
    Use "scrapy <command> -h" to see more info about a command
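To make the install repeatable across the multiple machines mentioned at the outset, a common follow-up (not specific to Scrapy) is to record the resolved dependency set with pip freeze; the requirements.txt filename is conventional, not required:

```shell
# Capture the exact versions pip resolved, for reuse on other machines
pip freeze > requirements.txt
```

On another machine, activating a fresh venv and running pip install -r requirements.txt then reproduces the same stack.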