Exporting Scrapy items to a .csv file keeps crawl results easy to sort, filter, and share with spreadsheet tools and data pipelines. CSV is lightweight and line-oriented, so it fits well for quick reviews and repeatable exports.
Scrapy writes items using its feed export system while a spider runs. When the output target ends with .csv, the built-in CSV exporter serializes each yielded item into a row and writes a header row for the selected fields.
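The same CSV feed can also be declared in the project settings instead of on the command line. A minimal sketch, assuming the products.csv output path used later in this walkthrough:

# settings.py: declaring the feed here is equivalent to passing an output path to scrapy crawl
FEEDS = {
    "products.csv": {
        "format": "csv",  # explicit, though the .csv extension already selects the CSV exporter
    },
}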
CSV is flat, so nested lists or dictionaries get stringified, and items with inconsistent fields can leave empty values across rows. Lock the column order before long runs to avoid shuffled headers, and treat overwrite mode as destructive because it replaces any existing output file.
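One way to sidestep the stringification issue is to flatten nested data before yielding. The parse callback below joins a hypothetical tags list into a single delimited cell; the selectors and the extra field are illustrative, not part of the demo project:

import scrapy


class FlattenedCatalogSpider(scrapy.Spider):
    # Illustrative sketch only; the selectors and the "tags" field are assumptions.
    name = "catalog_flat"
    start_urls = ["http://app.internal.example:8000/products/"]

    def parse(self, response):
        for product in response.css(".product"):
            yield {
                "name": product.css("h2::text").get(),
                "price": product.css(".price::text").get(),
                "url": response.urljoin(product.css("a::attr(href)").get()),
                # CSV has no nested types, so join list values into one delimited string up front.
                "tags": "|".join(product.css(".tag::text").getall()),
            }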
Related: How to export Scrapy items to JSON
Related: How to enable item pipelines in Scrapy
$ cd /root/sg-work/catalog_demo
# settings.py: pin the CSV column order and the output encoding
FEED_EXPORT_FIELDS = [
    "name",
    "price",
    "url",
]

FEED_EXPORT_ENCODING = "utf-8"
Set FEED_EXPORT_ENCODING to utf-8-sig when the CSV opens with garbled characters in spreadsheet apps.
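The change is one line in the same settings file:

# Prepend a UTF-8 byte-order mark so Excel and similar apps detect the encoding correctly
FEED_EXPORT_ENCODING = "utf-8-sig"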
$ scrapy crawl catalog -O products.csv
2026-01-01 09:39:10 [scrapy.extensions.feedexport] INFO: Stored csv feed (6 items) in: products.csv
The output format follows the file extension, so products.csv triggers the built-in CSV exporter.
-O overwrites any existing products.csv in place.
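The lowercase -o flag appends to an existing file instead of replacing it. When the feed is declared in settings, the same replace-on-each-run behavior can be requested explicitly; a sketch of the settings-file equivalent:

# settings.py: "overwrite": True mirrors the -O flag, replacing products.csv on every run
FEEDS = {
    "products.csv": {
        "format": "csv",
        "overwrite": True,
    },
}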
$ head -n 5 products.csv
name,price,url
Starter Plan,$29,http://app.internal.example:8000/products/starter-plan.html
Team Plan,$79,http://app.internal.example:8000/products/team-plan.html
Enterprise Plan,$199,http://app.internal.example:8000/products/enterprise-plan.html
Growth Plan,$129,http://app.internal.example:8000/products/growth-plan.html
##### snipped #####
The header should match FEED_EXPORT_FIELDS, and missing fields export as empty cells.
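To see that behavior in isolation, the standalone sketch below drives Scrapy's CsvItemExporter directly with two illustrative items, one of which omits price; the file name and values are made up for the example:

from scrapy.exporters import CsvItemExporter

with open("demo.csv", "wb") as f:  # exporters expect a binary-mode file
    exporter = CsvItemExporter(f, fields_to_export=["name", "price", "url"])
    exporter.start_exporting()
    exporter.export_item({"name": "Starter Plan", "price": "$29", "url": "https://example.com/starter"})
    exporter.export_item({"name": "Legacy Plan", "url": "https://example.com/legacy"})  # no "price" key
    exporter.finish_exporting()

# demo.csv now holds a header row plus two item rows; the second row's price cell is empty.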
$ wc -l products.csv
7 products.csv
Line count includes the header row, so item rows are total - 1.
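A small Python check gives the item count without the mental subtraction, and it also handles quoted fields containing embedded newlines, which wc -l would miscount. The file name is the one produced above:

import csv

# Parse the CSV properly, then drop the header row from the count.
with open("products.csv", newline="", encoding="utf-8") as f:
    item_rows = sum(1 for _ in csv.reader(f)) - 1

print(item_rows)  # 6 for the crawl shown above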