Automatic backups in Linux reduce recovery time after accidental deletion, filesystem damage, ransomware, or full host loss by keeping a second copy of important data on another system. A scheduled off-host copy matters most when the source machine is unavailable or cannot be trusted long enough for a manual export.
This workflow uses rsync over SSH to copy one directory tree to a backup host on a fixed schedule. rsync -a (archive mode) recurses and preserves permissions, timestamps, symlinks, and other common file metadata; --delete removes files from the backup when they no longer exist in the source; and a trailing slash on the source path copies the directory contents instead of creating an extra top-level directory on the backup host.
These steps target a current Ubuntu or Debian system that already has rsync, the OpenSSH client, and a working user crontab. The backup host must already accept key-based SSH logins for the backup account and also have rsync installed in its normal command path. This method fits ordinary file trees such as documents, exports, or application content, but live databases and virtual machine disks still need application-aware snapshots before a file-level copy runs safely.
Steps to automate Linux backups with rsync over SSH:
- Open a terminal on the source Linux system as the account that owns the files to be backed up.
- Confirm that the source account can connect to the backup host without any prompt.
$ ssh -o BatchMode=yes backupuser@backup.example.net 'printf "backup-login-ok\n"'
backup-login-ok
BatchMode=yes makes ssh fail instead of waiting for a password, passphrase, or host-key confirmation, which is the behavior needed for unattended cron runs.
If the first connection still asks to trust the host key, complete that one interactive login before relying on the scheduled job.
- Create the destination directory on the backup host and restrict it to the backup account.
$ ssh backupuser@backup.example.net 'mkdir -p ~/backups/documents && chmod 700 ~/backups/documents && ls -ld ~/backups/documents'
drwx------ 2 backupuser backupuser 4096 Apr 14 03:41 /home/backupuser/backups/documents
- Create local directories for the reusable script and log file.
$ mkdir -p ~/.local/bin ~/.local/state
- Save the following script as ~/.local/bin/automatic-backup.sh, adjusting the source and destination paths as needed.
#!/bin/sh
set -eu
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

SOURCE_DIR="$HOME/Documents/"
REMOTE_USER="backupuser"
REMOTE_HOST="backup.example.net"
REMOTE_DIR="backups/documents/"
REMOTE_DEST="$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"

exec rsync -a --delete --itemize-changes --human-readable \
    -e "ssh -o BatchMode=yes" \
    "$SOURCE_DIR" \
    "$REMOTE_DEST"
Keep the trailing slash on SOURCE_DIR when the remote directory should receive the contents of $HOME/Documents/ instead of an extra Documents directory level.
The backup host also needs rsync installed because rsync starts a matching receiver process over SSH on the remote side.
Check the source and destination paths carefully before running any rsync command that includes --delete, because the remote tree is pruned to match the source tree.
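A dry run is the safest way to preview what --delete would prune. This sketch demonstrates the idea against throwaway local directories (hypothetical /tmp paths) rather than the real backup host:

```shell
# Throwaway directories standing in for source and destination (hypothetical paths)
mkdir -p /tmp/delete-demo/src /tmp/delete-demo/dst
printf 'kept\n'  > /tmp/delete-demo/src/kept.txt
printf 'stale\n' > /tmp/delete-demo/dst/stale.txt

# -n (--dry-run) reports what would be deleted without changing anything
rsync -an --delete --itemize-changes /tmp/delete-demo/src/ /tmp/delete-demo/dst/

# stale.txt is still present because the dry run made no changes
ls /tmp/delete-demo/dst
```

Adding -n to the real rsync command in the script works the same way against the remote host.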
- Make the script executable so it can be called directly and from cron.
$ chmod 700 ~/.local/bin/automatic-backup.sh
- Run the script manually once to confirm that the transfer works before scheduling it.
$ ~/.local/bin/automatic-backup.sh
.d..tp..... ./
&lt;f+++++++++ archive.txt
&lt;f+++++++++ example.txt
&lt;f+++++++++ notes.txt
The itemized listing shows what rsync created or updated during the run. The initial transfer usually lists each copied file with a &lt;f+++++++++ marker, which indicates a file newly transferred to the remote side.
- Check the destination on the backup host to confirm that the expected files are present.
$ ssh backupuser@backup.example.net 'ls -lh ~/backups/documents'
total 12K
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 03:41 archive.txt
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 03:41 example.txt
-rw-r--r-- 1 backupuser backupuser 6 Apr 14 03:41 notes.txt
- Open the current user's crontab in the default editor.
$ crontab -e
The same per-user crontab syntax works on other Linux distributions, though the scheduler service is commonly named crond instead of cron on RHEL-like systems.
- Add the schedule and environment lines, then save the file. The example below runs the backup every day at midnight and appends output to a dedicated log.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=""
0 0 * * * $HOME/.local/bin/automatic-backup.sh >> $HOME/.local/state/automatic-backup.log 2>&1
Explicit script and log paths keep the job working even though cron starts commands with a smaller environment than an interactive shell.
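When a job behaves differently under cron than in an interactive shell, a temporary crontab entry can capture the environment cron actually provides. The one-minute schedule and /tmp path below are just for inspection and should be removed afterward:

```shell
# Temporary diagnostic entry: dump cron's environment once a minute
* * * * * env > /tmp/cron-env.txt 2>&1
```

Comparing /tmp/cron-env.txt with the output of env in a login shell usually explains path- or variable-related cron failures.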
- List the installed crontab to confirm that the scheduled backup is present exactly as intended.
$ crontab -l
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=""
0 0 * * * $HOME/.local/bin/automatic-backup.sh >> $HOME/.local/state/automatic-backup.log 2>&1
Reviewing other user and system cron locations is covered in How to list cron jobs in Linux.
- After the first scheduled window, inspect the log file to confirm that the unattended run completed and transferred any new files.
$ tail -n 20 ~/.local/state/automatic-backup.log
.d..t...... ./
&lt;f+++++++++ daily-proof.txt
Creating a small test file in the source directory before the scheduled run is an easy way to force a visible log entry the first time the job fires.
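Any small write into the source tree works as a marker. This sketch assumes the script's default SOURCE_DIR of $HOME/Documents/:

```shell
# Create the source directory if needed, then drop a small marker file
mkdir -p "$HOME/Documents"
printf 'scheduled-run marker\n' > "$HOME/Documents/daily-proof.txt"
cat "$HOME/Documents/daily-proof.txt"   # prints: scheduled-run marker
```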
- Check the destination again to confirm that the scheduled run copied the new file to the backup host.
$ ssh backupuser@backup.example.net 'ls -lh ~/backups/documents'
total 16K
-rw-r--r-- 1 backupuser backupuser  8 Apr 14 04:13 archive.txt
-rw-r--r-- 1 backupuser backupuser 12 Apr 14 04:13 daily-proof.txt
-rw-r--r-- 1 backupuser backupuser  8 Apr 14 04:13 example.txt
-rw-r--r-- 1 backupuser backupuser  6 Apr 14 04:13 notes.txt
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
