Installing Scrapy with pip is the quickest way to get the current crawler CLI, project templates, and shell tools without waiting for a distribution package to catch up. A dedicated Python environment also keeps crawler dependencies isolated from unrelated automation or system tools.

Upstream documentation still describes a direct pip install scrapy workflow and recommends a virtual environment on all platforms. That approach lets pip resolve Scrapy plus its core dependencies, such as Twisted, parsel, lxml, and pyOpenSSL, and then expose the scrapy command from the environment.

The steps below use a POSIX shell with Python 3.10 or later available. Installing into a distro-managed or system Python can be blocked by an externally managed environment policy or break OS-owned packages, so the safest path is to create a virtual environment first and keep any platform-specific compiler fixes limited to that environment.

Steps to install Scrapy using pip:

  1. Open a terminal with Python 3.10 or later available.
  2. Create a dedicated virtual environment for Scrapy.
    $ python3 -m venv ~/.venvs/scrapy

    Upstream Scrapy documentation recommends a virtual environment instead of a system-wide pip install.
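    What step 2 creates can be inspected directly: a virtual environment directory carries its own interpreter, pip launcher, and activation script. The sketch below uses a throwaway path so it is safe to run anywhere; the guide itself uses ~/.venvs/scrapy.

```shell
# A venv ships its own launchers; bin/ typically contains activate, pip,
# and python symlinks (throwaway directory used for illustration)
tmp=$(mktemp -d)
python3 -m venv "$tmp/scrapy"
ls "$tmp/scrapy/bin"
```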

  3. Activate the virtual environment.
    $ source ~/.venvs/scrapy/bin/activate

    The shell prompt usually changes to show the environment name.
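    Activation can be verified without relying on the prompt: sys.prefix reports which environment the running interpreter belongs to. The sketch below creates and activates a throwaway environment so it can run standalone; in the guide's flow the same check applies after step 3.

```shell
# Activation puts the environment's bin/ first on PATH, so "python" now
# resolves inside the venv; sys.prefix makes that visible
tmp=$(mktemp -d)
python3 -m venv "$tmp/scrapy"
. "$tmp/scrapy/bin/activate"
python -c 'import sys; print(sys.prefix)'
```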

  4. Install Scrapy with pip from the active environment.
    (scrapy) $ python -m pip install scrapy
    Collecting scrapy
      Downloading scrapy-2.15.0-py3-none-any.whl.metadata (4.3 kB)
    Collecting cryptography>=37.0.0 (from scrapy)
      Downloading cryptography-46.0.7-...whl.metadata (5.7 kB)
    Collecting lxml>=4.6.4 (from scrapy)
      Downloading lxml-6.0.4-...whl.metadata (3.1 kB)
    ##### snipped #####
    Installing collected packages: attrs, cryptography, lxml, parsel, pyopenssl, scrapy, twisted, w3lib
    Successfully installed attrs-26.1.0 cryptography-46.0.7 lxml-6.0.4 parsel-1.11.0 pyopenssl-26.0.0 scrapy-2.15.0 twisted-25.5.0 w3lib-2.4.1

    If pip falls back to building dependencies such as lxml or cryptography from source and the build fails, install the platform-specific compiler and development packages first, then repeat the install inside the same virtual environment.
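    Source builds only happen when no prebuilt wheel matches the current interpreter and platform. One way to see what pip can use on this machine is its debug output, sketched below with python3 so it also runs outside the environment; pip additionally accepts --only-binary :all: to refuse source builds outright.

```shell
# Source builds happen only when no prebuilt wheel matches this interpreter
# and platform; "pip debug" lists the wheel tags pip accepts here
python3 -m pip debug --verbose | grep -i 'compatible tags'
```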

  5. Check the installed Scrapy version.
    (scrapy) $ scrapy version
    Scrapy 2.15.0

    Using python -m pip in step 4 keeps the install tied to the active interpreter, which matters when multiple Python versions are present.
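    That binding can be inspected directly: pip reports its own version together with the path of the interpreter it belongs to. The sketch uses python3 so it runs outside the environment as well; inside the activated environment, plain python behaves the same.

```shell
# The path pip reports shows which interpreter this pip belongs to; inside
# the activated environment it should point under the venv directory
python3 -m pip --version
```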

  6. Run scrapy outside a project directory to confirm the CLI is available.
    (scrapy) $ scrapy
    Scrapy 2.15.0 - no active project
    
    Usage:
      scrapy <command> [options] [args]

    Available commands:
      bench         Run quick benchmark test
      fetch         Fetch a URL using the Scrapy downloader
      genspider     Generate new spider using pre-defined templates
      runspider     Run a self-contained spider (without creating a project)
      settings      Get settings values
      shell         Interactive scraping console
      startproject  Create new project
      version       Print Scrapy version
      view          Open URL in browser, as seen by Scrapy

      [ more ]      More commands available when run from project directory

    Use "scrapy <command> -h" to see more info about a command

    The "no active project" message is expected until a project is created or the command is run from an existing project directory.