Saving responses from cURL into files keeps long HTTP payloads, logs, and binary downloads manageable. Persisted output simplifies debugging APIs, comparing responses over time, and passing data into other tools. Using file-based output instead of viewing everything on screen avoids lost details when buffers scroll.
cURL writes the response body to standard output and error details to standard error by default. Shell redirection operators such as >, >>, 2>, and 2>&1 capture these streams into regular files, while options such as --output and --remote-name instruct cURL to write directly to a named file. Combining these mechanisms provides flexible control over where response data and diagnostic messages end up.
Careful handling of overwrite versus append semantics prevents accidental data loss when using redirection. Binary responses such as archives or images must be written to files instead of terminals to avoid corruption. Choosing sensible filenames, locations, and permissions ensures that saved cURL output remains accessible for later inspection and automation without exposing sensitive information.
Related: How to resume a download with cURL
Related: How to use download compression in cURL
Steps to save cURL output to a file:
- Open a shell in a directory with write permissions for the current user.
$ whoami
root
$ pwd
/work
$ ls
avatar.png  backup.tar.gz  bin  caddy  certs
##### snipped #####
- Store the response body from a request into a new text file using standard output redirection.
$ curl "https://example.com/" > page.txt % Total % Received % Xferd Average Speed Time Time Time Current ##### snipped ##### $ ls -lh page.txt -rw-r--r-- 1 root root 1.3K Dec 21 10:03 page.txt
The > operator truncates any existing destination file before writing new data, so the previous contents of page.txt are lost once this command finishes.
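For scripts where an accidental overwrite would be costly, the shell's noclobber option offers a guard; the following is only a minimal sketch, assuming a Bash-compatible shell and reusing the page.txt filename from this step.
$ set -o noclobber                                  # make > refuse to overwrite existing files
$ curl --silent "https://example.com/" > page.txt   # now fails if page.txt already exists
$ curl --silent "https://example.com/" >| page.txt  # >| deliberately overrides noclobber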
- Append additional response data to the same file using >> to avoid overwriting existing content.
$ curl "https://example.com/changes" >> page.txt % Total % Received % Xferd Average Speed Time Time Time Current ##### snipped ##### $ tail -n 3 page.txt - Added new endpoint for status checks. - Updated response schema for /data. - Fixed minor UI typos.
The >> operator adds new bytes to the end of an existing file and creates the file if it does not already exist.
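When the same endpoint is polled repeatedly, appending each response under a timestamp keeps a running history in one file; this is only a sketch, assuming a Bash-compatible shell and a hypothetical /status endpoint.
$ for i in 1 2 3; do
>   date -u >> status-history.txt                              # label each poll with a UTC timestamp
>   curl --silent "https://example.com/status" >> status-history.txt
>   echo >> status-history.txt                                 # blank line between entries
>   sleep 60
> done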
- Capture only error messages from a failing request into a dedicated log file using standard error redirection.
$ curl --silent --show-error "https://invalid.example.test" 2> curl-error.log
$ cat curl-error.log
curl: (6) Could not resolve host: invalid.example.test
File descriptor 2 represents standard error, so 2> redirects only diagnostic output while leaving the response body, if any, on the terminal.
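The same mechanism also allows the body and the diagnostics to be written to two separate files in a single invocation, which keeps saved responses clean while still recording failures; a small sketch, with api.example.com/data and body.json used as hypothetical names.
$ curl --silent --show-error "https://api.example.com/data" > body.json 2> curl-error.log
$ test -s curl-error.log && echo "errors were logged" || echo "no errors logged"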
- Combine response data and error messages into a single troubleshooting log.
$ curl --silent --show-error "https://invalid.example.test" > curl-combined.log 2>&1
$ wc -l curl-combined.log
1 curl-combined.log
The 2>&1 syntax sends standard error to the same destination as standard output, merging both streams into one file.
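For quick troubleshooting across several endpoints, both streams can be appended to one growing log per request; a sketch assuming a Bash-compatible shell, with the second URL intentionally unreachable.
$ for url in "https://example.com/" "https://invalid.example.test"; do
>   echo "=== $url ===" >> curl-combined.log                   # separator before each request
>   curl --silent --show-error "$url" >> curl-combined.log 2>&1
> done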
- Use --output to let cURL write directly to a specific filename without shell redirection.
$ curl --output logo.jpg "https://example.com/logo.jpg"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
##### snipped #####
$ ls -lh logo.jpg
-rw-r--r-- 1 root root 338 Dec 21 10:04 logo.jpg
The --output option keeps the destination filename inside the cURL invocation itself, so scripts do not have to rely on shell redirection or worry about metacharacters in the command line.
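In scripts, --output pairs well with --fail, which makes cURL exit non-zero on HTTP errors instead of saving an error page, and with --create-dirs, which creates missing directories in the --output path; a sketch under those assumptions, with downloads/ as a hypothetical destination.
$ curl --fail --silent --show-error --create-dirs --output downloads/logo.jpg "https://example.com/logo.jpg" || echo "download failed; do not trust downloads/logo.jpg"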
- Use --remote-name to save the response using the filename advertised in the URL path.
$ curl --remote-name "https://example.com/files/report-2024.txt"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
##### snipped #####
$ ls -lh report-2024.txt
-rw-r--r-- 1 root root 24K Dec 21 10:04 report-2024.txt
--remote-name mirrors the basename from the remote URL, which is convenient when downloading many files that should retain their original names.
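Two related options can refine this behavior: --remote-header-name (-J) prefers a filename supplied in the server's Content-Disposition header, and --output-dir, available in newer cURL releases, saves the file into a chosen directory; the reports/ directory below is a hypothetical example.
$ curl --remote-name --remote-header-name "https://example.com/files/report-2024.txt"   # use the server-suggested filename if one is sent
$ curl --remote-name --output-dir reports "https://example.com/files/report-2024.txt"   # save into reports/ instead of the current directory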
- Verify that all expected files exist and have non-zero sizes to confirm that the cURL output was saved successfully.
$ ls -lh page.txt curl-error.log curl-combined.log logo.jpg report-2024.txt
-rw-r--r-- 1 root root   55 Dec 21 10:03 curl-combined.log
-rw-r--r-- 1 root root   55 Dec 21 10:03 curl-error.log
-rw-r--r-- 1 root root  338 Dec 21 10:04 logo.jpg
-rw-r--r-- 1 root root 1.4K Dec 21 10:03 page.txt
-rw-r--r-- 1 root root  24K Dec 21 10:04 report-2024.txt
Successful saving is indicated by the presence of the listed files with realistic non-zero sizes and by the ability to open them with appropriate text or binary viewers.
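The same verification can be scripted so automation flags a missing or empty file immediately; a minimal sketch, assuming a Bash-compatible shell and the filenames used in the earlier steps.
$ for f in page.txt curl-error.log curl-combined.log logo.jpg report-2024.txt; do
>   test -s "$f" && echo "OK: $f" || echo "missing or empty: $f"
> done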
