Automatic backups in Linux reduce recovery time after accidental deletion, filesystem damage, ransomware, or full host loss by keeping a second copy of important data on another system. A scheduled off-host copy matters most when the source machine is unavailable or cannot be trusted long enough for a manual export.
This workflow uses rsync over SSH to copy one directory tree to a backup host on a fixed schedule. rsync -a preserves normal file metadata, --delete removes files from the backup when they no longer exist in the source, and a trailing slash on the source path copies the directory's contents instead of creating an extra top-level directory on the backup host.
These steps target a current Ubuntu or Debian system that already has rsync, the OpenSSH client, and a working user crontab. The backup host must already accept key-based SSH logins for the backup account and also have rsync installed in its normal command path. This method fits ordinary file trees such as documents, exports, or application content, but live databases and virtual machine disks still need application-aware snapshots before a file-level copy runs safely.
$ ssh -o BatchMode=yes backupuser@backup.example.net 'printf "backup-login-ok\n"'
backup-login-ok
BatchMode=yes makes ssh fail instead of waiting for a password, passphrase, or host-key confirmation, which is the behavior needed for unattended cron runs.
If the first connection still asks to trust the host key, complete that one interactive login before relying on the scheduled job.
$ ssh backupuser@backup.example.net 'mkdir -p ~/backups/documents && chmod 700 ~/backups/documents && ls -ld ~/backups/documents'
drwx------ 2 backupuser backupuser 4096 Apr 14 03:41 /home/backupuser/backups/documents
$ mkdir -p ~/.local/bin ~/.local/state
Save the following as ~/.local/bin/automatic-backup.sh:

#!/bin/sh
set -eu
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

SOURCE_DIR="$HOME/Documents/"
REMOTE_USER="backupuser"
REMOTE_HOST="backup.example.net"
REMOTE_DIR="backups/documents/"
REMOTE_DEST="$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"

exec rsync -a --delete --itemize-changes --human-readable \
    -e "ssh -o BatchMode=yes" \
    "$SOURCE_DIR" \
    "$REMOTE_DEST"
Keep the trailing slash on SOURCE_DIR when the remote directory should receive the contents of $HOME/Documents/ instead of an extra Documents directory level.
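The slash rule can be checked locally with a dry run before the first real transfer; the temporary directories below are illustrative only and are not part of the backup job:

```shell
# Illustrative only: compare trailing-slash behavior using local temp dirs.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/Documents"
touch "$src/Documents/a.txt"

# Trailing slash: the contents of Documents would land directly in $dst.
rsync -a --dry-run --itemize-changes "$src/Documents/" "$dst/"

# No trailing slash: rsync would create $dst/Documents/a.txt instead.
rsync -a --dry-run --itemize-changes "$src/Documents" "$dst/"
```

Because --dry-run makes no changes, both invocations can be compared safely against the same destination.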
The backup host also needs rsync installed because rsync starts a matching receiver process over SSH on the remote side.
Check the source and destination paths carefully before running any rsync command that includes --delete, because the remote tree is pruned to match the source tree.
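A --dry-run pass is a cheap way to preview exactly what --delete would prune; the local paths in this sketch are stand-ins for the real source and destination:

```shell
# Illustrative only: preview deletions with --dry-run before a real --delete run.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/keep.txt" "$dst/keep.txt" "$dst/stale.txt"

# A "*deleting stale.txt" line appears in the listing, but nothing is removed yet.
rsync -a --delete --dry-run --itemize-changes "$src/" "$dst/"
ls "$dst"
```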
$ chmod 700 ~/.local/bin/automatic-backup.sh
$ ~/.local/bin/automatic-backup.sh
.d..tp..... ./
<f+++++++++ archive.txt
<f+++++++++ example.txt
<f+++++++++ notes.txt
The transfer listing shows what rsync created or updated during the run. On an initial transfer, each copied file appears with a <f+++++++++ marker: < means the item was sent to the remote side, f marks a regular file, and the run of + signs marks a newly created item.
$ ssh backupuser@backup.example.net 'ls -lh ~/backups/documents'
total 12K
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 03:41 archive.txt
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 03:41 example.txt
-rw-r--r-- 1 backupuser backupuser 6 Apr 14 03:41 notes.txt
$ crontab -e
The same per-user crontab syntax works on other Linux distributions, though the scheduler service is commonly named crond instead of cron on RHEL-like systems.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=""
0 0 * * * $HOME/.local/bin/automatic-backup.sh >> $HOME/.local/state/automatic-backup.log 2>&1
Explicit script and log paths keep the job working even though cron starts commands with a smaller environment than an interactive shell.
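If a run could still be copying when the next one starts (a large first sync, a slow link), a flock wrapper from util-linux is a common optional guard. It is not part of the job above, and the lock-file path is an assumption:

```shell
# Optional guard (assumed lock path): flock -n skips a run if the previous
# backup still holds the lock, instead of letting two rsyncs overlap.
# The crontab command field would become:
#   0 0 * * * flock -n $HOME/.local/state/automatic-backup.lock $HOME/.local/bin/automatic-backup.sh >> $HOME/.local/state/automatic-backup.log 2>&1
# The skip behavior can be seen locally:
lock=$(mktemp)
flock -n "$lock" sh -c "flock -n '$lock' true || echo 'second run blocked'"
```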
$ crontab -l
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=""
0 0 * * * $HOME/.local/bin/automatic-backup.sh >> $HOME/.local/state/automatic-backup.log 2>&1
Reviewing other user and system cron locations is covered in How to list cron jobs in Linux.
$ tail -n 20 ~/.local/state/automatic-backup.log
.d..t...... ./
<f+++++++++ daily-proof.txt
Creating a small test file in the source directory before the scheduled run is an easy way to force a visible log entry the first time the job fires.
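The forcing step itself is one line; the path assumes the same source directory used throughout, and the mkdir -p is only there to make the snippet self-contained:

```shell
# Writes a timestamp into the watched tree so the next run has something to copy.
mkdir -p "$HOME/Documents"
date > "$HOME/Documents/daily-proof.txt"
```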
$ ssh backupuser@backup.example.net 'ls -lh ~/backups/documents'
total 16K
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 04:13 archive.txt
-rw-r--r-- 1 backupuser backupuser 12 Apr 14 04:13 daily-proof.txt
-rw-r--r-- 1 backupuser backupuser 8 Apr 14 04:13 example.txt
-rw-r--r-- 1 backupuser backupuser 6 Apr 14 04:13 notes.txt