HTTP response compression reduces transfer size for HTML, JSON, CSS, JavaScript, logs, and other text-heavy payloads, which speeds up downloads and lowers bandwidth use on slow or metered links.
In an HTTP transfer, the client advertises supported encodings in Accept-Encoding and the server replies with Content-Encoding when it sends a compressed body. In cURL, --compressed requests every content encoding that the current build can decode and then expands the response before writing the output file, so the saved body is ready for normal text tools.
Compression is usually beneficial for text responses and often pointless for files that are already compressed, such as ZIP archives, images, or video. Server and proxy behavior can also vary, and available decoders depend on how cURL was built, so it is worth checking both the response headers and the saved file when the exact byte format matters.
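The difference is easy to measure locally with gzip itself. In this sketch, repetitive JSON stands in for a text response and random bytes stand in for an already-compressed payload such as an image; the filenames are local stand-ins, not files used in the steps below:

```shell
# Repetitive JSON lines compress well; high-entropy bytes do not.
seq 1 2000 | sed 's/^/{"request_id": /; s/$/, "status": "ok"}/' > text.sample
head -c 100000 /dev/urandom > binary.sample

gzip -c text.sample | wc -c     # far smaller than the input
gzip -c binary.sample | wc -c   # roughly the input size, or slightly larger
wc -c text.sample binary.sample
```

The random sample usually grows slightly after gzip, which is why re-compressing ZIP archives, images, or video wastes CPU for no gain.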
Related: How to save cURL output to a file
Related: How to show HTTP response headers in cURL
Steps to use compressed downloads in cURL:
- Create a clean working directory for the compressed download and header capture files.
$ mkdir -p ~/curl-compressed-download
$ cd ~/curl-compressed-download
- Download a text response with --compressed and save the negotiated response headers separately.
$ curl --compressed --silent --show-error --output daily-usage-summary-2026-03-29.json --dump-header daily-usage-summary.headers https://downloads.example.net/exports/reports/daily-usage-summary-2026-03-29.json
--dump-header keeps the server metadata in a separate file, which makes it easier to compare the transfer headers with the saved body.
- Inspect the response headers to confirm that the server actually returned a compressed payload.
$ grep -iE '^(content-encoding|vary|content-type|content-length):' daily-usage-summary.headers
content-type: application/json
content-encoding: gzip
vary: Accept-Encoding
content-length: 248
Content-Encoding: gzip confirms that the server compressed the response on the wire, while Vary: Accept-Encoding shows that the resource can change based on compression negotiation.
- Check the saved file to confirm that cURL expanded the response before writing it to disk.
$ file daily-usage-summary-2026-03-29.json
daily-usage-summary-2026-03-29.json: JSON text data
$ sed -n '1,4p' daily-usage-summary-2026-03-29.json
{
  "report_date": "2026-03-29",
  "service": "edge-ingest",
  "requests_ok": 18421,
The saved file can be larger than the Content-Length value in the headers because that header describes the compressed transfer, while the local file contains the decompressed body.
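The size gap can be reproduced locally. This sketch uses stand-in files rather than the download above: it builds a body, gzips it the way a server would, and records the wire size as a Content-Length header line:

```shell
# decoded.json and sample.headers are local stand-ins for the
# downloaded body and the --dump-header file from the steps above.
seq 1 500 | sed 's/^/{"requests_ok": /; s/$/}/' > decoded.json
wire=$(gzip -c decoded.json | wc -c)
printf 'content-length: %s\r\n' "$wire" > sample.headers

grep -i '^content-length:' sample.headers   # compressed transfer size
wc -c < decoded.json                        # larger decoded size on disk
```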
- Trace the request and response headers together when the compression negotiation itself needs proof.
$ curl --compressed --verbose --output /dev/null https://downloads.example.net/exports/reports/daily-usage-summary-2026-03-29.json 2>&1 | grep -iE '^[><] (Accept-Encoding|content-encoding|vary):'
> Accept-Encoding: deflate, gzip
< content-encoding: gzip
< vary: Accept-Encoding
The exact Accept-Encoding list depends on the local cURL build, so newer builds can also advertise br or zstd when those decoders are available.
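Which decoders a build supports can be checked without making any request; the Features line of curl --version names the compiled-in compression backends, such as libz for gzip and deflate, plus brotli and zstd when present:

```shell
# Show the feature flags of the local cURL build; the exact list
# varies between builds and distributions.
curl --version | grep -i '^features:'
```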
- Request compressed bytes explicitly and omit --compressed when a downstream tool must receive the raw encoded stream.
$ curl --silent --show-error --header 'Accept-Encoding: gzip' --output daily-usage-summary-2026-03-29.raw.gz --dump-header daily-usage-summary.raw.headers https://downloads.example.net/exports/reports/daily-usage-summary-2026-03-29.json
Do not combine this workflow with --compressed, because cURL will decode the response before writing it and the saved file will no longer contain the original compressed bytes.
- Verify that the raw artifact is still gzip data before handing it to another program or archiving it.
$ file daily-usage-summary-2026-03-29.raw.gz
daily-usage-summary-2026-03-29.raw.gz: gzip compressed data, from Unix, original size modulo 2^32 684
$ gzip -t daily-usage-summary-2026-03-29.raw.gz && echo "gzip stream ok"
gzip stream ok
Keep the raw-stream path for handoff or archival cases; for normal text inspection, searching, or parsing, --compressed is the safer default because the saved file is already decoded.
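When an archived raw stream is needed later, it can be expanded without modifying the stored artifact. This sketch builds a local stand-in for the .raw.gz file so the round trip is reproducible end to end:

```shell
# sample.raw.gz is a local stand-in for the archived raw artifact
# saved in the steps above.
printf '{"report_date": "2026-03-29", "requests_ok": 18421}\n' \
  | gzip -c > sample.raw.gz

# -d decompresses, -c streams to stdout; the .gz file stays untouched.
gzip -dc sample.raw.gz
gzip -t sample.raw.gz && echo "archive still intact"
```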
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
