
3.6.5 Automating backups

Why automate backups?

Manual backups are easy to forget, inconsistent, and error-prone. Automation ensures:

  - Backups run on a regular, predictable schedule.
  - Every run uses the same options and destinations.
  - Failures are logged and can trigger alerts instead of going unnoticed.

In this chapter, the focus is on how to automate backups using common Linux tools, building on the backup methods and tools you already know.


Designing an automated backup strategy

Before writing any automation, decide:

  - What to back up (which directories, databases, or volumes).
  - Where backups go (local disk, remote server, offsite).
  - How often backups run (hourly, daily, weekly).
  - How long backups are kept (retention policy).
  - How you will verify backups and test restores.

Automation will encode these decisions into repeatable processes.


Common approaches to automating backups

You’ll see the same pattern repeatedly:

  1. A backup command or script (e.g. rsync, tar, snapshot tool)
  2. A scheduler (e.g. cron, systemd timers)
  3. Logging and notifications (log files, email, monitoring hooks)
  4. Retention management (cleanup of old backups)

You can mix and match these components depending on your needs.
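The four components above can be sketched as one minimal wrapper script. This is a hedged sketch: the paths and the backup command are placeholders, and a tar of a temporary file stands in for the real job so the example is self-contained.

```shell
#!/bin/bash
# run-backup.sh: minimal wrapper combining a backup command, logging,
# and a failure hook. All paths and the command are placeholders.
set -u

WORK=$(mktemp -d)                 # stand-in for real source/destination
echo "demo data" > "$WORK/file.txt"

BACKUP_CMD="tar -cf $WORK/backup.tar -C $WORK file.txt"   # placeholder job
LOG_FILE="$WORK/backup.log"                               # placeholder log

if $BACKUP_CMD >> "$LOG_FILE" 2>&1; then
    echo "OK: backup finished at $(date '+%F %T')" | tee -a "$LOG_FILE"
else
    echo "FAIL: backup failed at $(date '+%F %T')" | tee -a "$LOG_FILE"
    # Notification hook (mail, monitoring ping) would go here.
    exit 1
fi
```

In a real deployment the scheduler (cron or a systemd timer) runs this wrapper, and the retention step is either part of the wrapper or a separate scheduled cleanup.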


Using cron to schedule backups

cron is the traditional UNIX scheduler. It runs commands at specific times.

Basic cron workflow

  1. Create or choose a backup script (see next sections).
  2. Ensure it’s non-interactive (no prompts).
  3. Add it to the user’s crontab or a system crontab.

Edit the current user’s crontab:

crontab -e

A typical daily backup at 02:30 might look like:

30 2 * * * /usr/local/sbin/backup-home.sh >> /var/log/backup-home.log 2>&1
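Because cron runs jobs with a minimal environment, it is common to set PATH and MAILTO at the top of the crontab. A sketch (the address is a placeholder):

```
# Mail any job output (including errors) to this address
MAILTO=admin@example.com
# cron's default PATH is short; list the directories your scripts need
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

30 2 * * * /usr/local/sbin/backup-home.sh >> /var/log/backup-home.log 2>&1
```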

Key points:

  - Use absolute paths: cron runs with a minimal environment and a short PATH.
  - Redirect stdout and stderr (>> ... 2>&1) so every run is logged.
  - The script must be non-interactive; cron cannot answer prompts.

Cron time syntax refresher

Each cron line has 5 time fields, then the command:

# ┌─ minute (0–59)
# │ ┌─ hour (0–23)
# │ │ ┌─ day of month (1–31)
# │ │ │ ┌─ month (1–12)
# │ │ │ │ ┌─ day of week (0–7, 0/7 = Sunday)
# │ │ │ │ │
# * * * * *  command

Examples:

  0 3 * * *      daily at 03:00
  30 2 * * 0     Sundays at 02:30
  */15 * * * *   every 15 minutes
  15 1 1 * *     the first day of each month at 01:15

For more advanced scheduling, systemd timers can be preferable.


Using systemd timers for backups

On systems with systemd, timers are an alternative to cron, with better integration into logs and service management.

Basic structure

You’ll typically create:

  - A .service unit that runs the backup script once (Type=oneshot).
  - A .timer unit that schedules when the service runs.

Example: automatic /home backup using rsync.

1. Create the service

Create /etc/systemd/system/backup-home.service:

[Unit]
Description=Backup /home to external drive
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/backup-home.sh

2. Create the timer

Create /etc/systemd/system/backup-home.timer:

[Unit]
Description=Daily /home backup timer
[Timer]
OnCalendar=daily
Persistent=true
Unit=backup-home.service
[Install]
WantedBy=timers.target

Key points:

  - OnCalendar=daily runs the job at midnight each day.
  - Persistent=true runs a missed job as soon as possible (e.g. after the machine was powered off at the scheduled time).
  - Unit= names the service the timer activates; by default a timer triggers the service with the same name.

3. Enable and start the timer

sudo systemctl daemon-reload
sudo systemctl enable --now backup-home.timer

Check status:

systemctl list-timers
systemctl status backup-home.service
journalctl -u backup-home.service
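The OnCalendar= setting accepts much more than shorthands like daily. A few valid expressions (you can verify any expression with systemd-analyze calendar):

```
[Timer]
# Every day at 03:00
OnCalendar=*-*-* 03:00:00
# Weekdays at 02:30
OnCalendar=Mon..Fri 02:30
# First day of each month at 04:00
OnCalendar=*-*-01 04:00:00
```

For example, systemd-analyze calendar "Mon..Fri 02:30" prints the normalized form and the next time the expression will elapse.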

Building a backup script for automation

Automation is easier if you encapsulate your backup logic in a script. Here’s a pattern for a simple automated backup using rsync.

Example: Automated home directory backup to external disk

Assume:

  - The source is /home.
  - An external disk is mounted at /mnt/backup-home.
  - Each run creates a timestamped directory, plus a latest symlink.
  - Logs go to /var/log/backup/backup-home.log.

Create /usr/local/sbin/backup-home.sh:

#!/bin/bash
set -euo pipefail
SRC="/home"
DEST_BASE="/mnt/backup-home"
DATE="$(date +'%Y-%m-%d_%H-%M-%S')"
DEST="${DEST_BASE}/${DATE}"
LATEST="${DEST_BASE}/latest"
LOG_DIR="/var/log/backup"
LOG_FILE="${LOG_DIR}/backup-home.log"
mkdir -p "${LOG_DIR}"
exec >> "${LOG_FILE}" 2>&1
echo "===== $(date) : Starting backup ====="
# Check that the destination is mounted before creating anything on it
if ! mountpoint -q "${DEST_BASE}"; then
    echo "ERROR: Destination ${DEST_BASE} is not mounted. Aborting."
    exit 1
fi
mkdir -p "${DEST}"
# Run rsync
rsync -aAXH --delete \
    --exclude='.cache/' \
    --exclude='Downloads/' \
    "${SRC}/" "${DEST}/"
# Update 'latest' symlink
ln -sfn "${DEST}" "${LATEST}"
echo "===== $(date) : Backup completed successfully ====="

Then make it executable:

sudo chmod +x /usr/local/sbin/backup-home.sh

You can now schedule this script with cron or a systemd timer.

Key automation aspects demonstrated:

  - Non-interactive: no prompts, safe to run from cron or a timer.
  - Self-logging: output is redirected to a log file inside the script.
  - Fail-safe: the script aborts if the destination is not mounted.
  - Timestamped destinations plus a latest symlink for easy restores.

Automating tar-based archives

You may prefer compressed archives for certain data (e.g. configs, databases).

Example: Daily /etc backup with rotation

Create /usr/local/sbin/backup-etc.sh:

#!/bin/bash
set -euo pipefail
BACKUP_DIR="/var/backups/etc"
DATE="$(date +'%Y-%m-%d')"
ARCHIVE="${BACKUP_DIR}/etc-${DATE}.tar.xz"
RETENTION_DAYS=30
mkdir -p "${BACKUP_DIR}"
# Create archive
tar --xattrs --acls -cJf "${ARCHIVE}" /etc
# Remove old archives
find "${BACKUP_DIR}" -type f -name 'etc-*.tar.xz' -mtime +"${RETENTION_DAYS}" -delete

Then schedule daily at 01:15 with cron:

15 1 * * * /usr/local/sbin/backup-etc.sh >> /var/log/backup-etc.log 2>&1

This shows how retention can be baked into the automation using find -mtime.
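Before scheduling a -delete rule, it is worth previewing what it would remove. A sketch using a temporary directory (GNU touch -d backdates the file mtimes to simulate an old archive):

```shell
#!/bin/bash
# Preview, then apply, an age-based retention policy with find(1).
set -eu

BACKUP_DIR=$(mktemp -d)          # stand-in for /var/backups/etc
RETENTION_DAYS=14

# Simulate one old and one recent archive (GNU touch -d sets the mtime).
touch -d '20 days ago' "$BACKUP_DIR/etc-old.tar.xz"
touch "$BACKUP_DIR/etc-new.tar.xz"

# Dry run: -print instead of -delete shows what the policy would remove.
find "$BACKUP_DIR" -type f -name 'etc-*.tar.xz' -mtime +"$RETENTION_DAYS" -print

# Apply the policy.
find "$BACKUP_DIR" -type f -name 'etc-*.tar.xz' -mtime +"$RETENTION_DAYS" -delete
```

Swapping -print back in at any time gives a safe preview of the current policy.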


Automating rsync to remote servers

Automated offsite backups are critical. For rsync to run unattended over SSH, you typically use SSH keys.

Steps overview

  1. Generate an SSH key pair for the backup user (no passphrase, or handle passphrases via an agent).
  2. Install the public key on the remote backup server.
  3. Write a backup script that runs rsync over SSH.
  4. Schedule the script with cron or systemd timers.

Example script: push to remote server

Create /usr/local/sbin/backup-home-remote.sh:

#!/bin/bash
set -euo pipefail
SRC="/home"
REMOTE_USER="backup"
REMOTE_HOST="backup.example.com"
REMOTE_BASE="/backups/hostname-home"
DATE="$(date +'%Y-%m-%d_%H-%M-%S')"
REMOTE_DIR="${REMOTE_BASE}/${DATE}"
LATEST_LINK="${REMOTE_BASE}/latest"
# rsync with SSH
rsync -aAXH --delete \
    -e "ssh -o BatchMode=yes" \
    "${SRC}/" "${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_DIR}/"
# Update 'latest' symlink on remote
ssh -o BatchMode=yes "${REMOTE_USER}@${REMOTE_HOST}" \
    "ln -sfn '${REMOTE_DIR}' '${LATEST_LINK}'"

Then schedule in cron or a systemd timer. Ensure network and firewall settings allow SSH access.
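On the server side, the unattended key can be locked down in authorized_keys. A sketch (the key itself is elided; rrsync is a restricted-rsync helper shipped with rsync, and its installed path varies by distribution):

```
# ~backup/.ssh/authorized_keys on the backup server
command="/usr/bin/rrsync /backups/hostname-home",restrict ssh-ed25519 AAAA... backup@client
```

This confines the key to rsync transfers under the given directory, so a compromised client cannot use it for arbitrary shell access.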


Using snapshot-based tools in automation

If you use filesystem features (e.g. Btrfs subvolume snapshots, LVM snapshots, or snapshot-enabled backup tools like restic, borgbackup), automation usually looks like:

  1. Create snapshot (fast, consistent point-in-time view).
  2. Back up snapshot using rsync, tar, or a snapshot-aware tool.
  3. Prune old snapshots/backups based on a retention policy.
  4. Remove snapshot when finished if it’s temporary.

Example pattern for Btrfs (simplified):

#!/bin/bash
set -euo pipefail
SRC_SUBVOL="/@home"
SNAP_DIR="/.snapshots/home"
DATE="$(date +'%Y-%m-%d_%H-%M-%S')"
SNAP="${SNAP_DIR}/${DATE}"
# Create readonly snapshot
btrfs subvolume snapshot -r "${SRC_SUBVOL}" "${SNAP}"
# Back up the snapshot (for example, with rsync)
rsync -aHAX "${SNAP}/" /mnt/backup-home-snapshots/"${DATE}/"
# Optional: remove old snapshots/backups using find or snapshot tools

The details of snapshot creation and pruning depend on your filesystem and are covered in snapshot-focused chapters, but the automation pattern is the same: script + scheduler + retention.
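Pruning by "keep the newest N" can be done by sorting the timestamp-named snapshots. A sketch using plain directories as stand-ins (for real Btrfs snapshots, the rm -r would be btrfs subvolume delete):

```shell
#!/bin/bash
# Keep only the newest KEEP snapshot directories, deleting the rest.
# Plain directories stand in for real snapshots in this sketch.
set -eu

SNAP_DIR=$(mktemp -d)            # stand-in for /.snapshots/home
KEEP=2

# Simulate timestamp-named snapshots (names sort chronologically).
mkdir "$SNAP_DIR/2024-01-01_00-00-00" \
      "$SNAP_DIR/2024-02-01_00-00-00" \
      "$SNAP_DIR/2024-03-01_00-00-00"

# Sort names newest-first, skip the first KEEP, remove the remainder.
ls -1 "$SNAP_DIR" | sort -r | tail -n +$((KEEP + 1)) | while read -r snap; do
    rm -r "${SNAP_DIR:?}/${snap}"
done

ls -1 "$SNAP_DIR"
```

This works because the timestamp format used throughout this chapter (%Y-%m-%d_%H-%M-%S) sorts lexically in chronological order.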


Managing retention and cleanup

Automated backups accumulate quickly; good automation includes automatic cleanup so the destination never fills up.

Common techniques:

  - Age-based deletion with find -mtime, for example:

  find /var/backups/db -type f -name 'db-*.tar.xz' -mtime +14 -delete

  - Keeping only the newest N backups by sorting timestamped names.
  - Tool-specific pruning (e.g. borg prune, restic forget --prune).

Consider different retention for:

  - Daily backups (e.g. keep the last 7-14).
  - Weekly backups (e.g. keep the last 4-8).
  - Monthly backups (e.g. keep the last 6-12).

Logging, monitoring, and notifications

Automation is only useful if you know when it fails.

Logging

Redirect each run's output to a log file so you can review what happened:

  /usr/local/sbin/backup.sh >> /var/log/backup.log 2>&1

Health checks

Beyond logs, consider active checks: verify that a recent backup exists, or ping a monitoring endpoint at the end of each successful run, so that silence raises an alert.

Simple email alert example

If your system is configured to send mail, you can add a basic failure notification:

if ! rsync ...; then
    echo "Backup failed on $(hostname) at $(date)" \
        | mail -s "Backup FAILURE" admin@example.com
    exit 1
fi

Security considerations in automated backups

Automation must not weaken security:

Testing automated backups and restores

Automation is not complete until you verify that restores work.

A simple automated check could be a periodic script that:

  1. Lists the most recent backup directory or file.
  2. Checks its age.
  3. Fails and alerts if no recent backup is found.
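The steps above can be sketched as a small script. This is a sketch under assumptions: the directory and age threshold are placeholders, and a freshly touched file simulates a recent backup.

```shell
#!/bin/bash
# Alert if the newest backup in a directory is older than MAX_AGE_HOURS.
set -eu

BACKUP_DIR=$(mktemp -d)          # stand-in for the real backup destination
MAX_AGE_HOURS=26                 # a daily backup should never exceed this

touch "$BACKUP_DIR/home-backup.tar.xz"   # simulate a fresh backup

# Find the most recently modified backup file (GNU find/stat assumed).
newest=$(find "$BACKUP_DIR" -type f -printf '%T@ %p\n' | sort -nr | head -n1 | cut -d' ' -f2-)

if [ -z "$newest" ]; then
    echo "ALERT: no backups found in $BACKUP_DIR"
    exit 1
fi

age_hours=$(( ( $(date +%s) - $(stat -c %Y "$newest") ) / 3600 ))
if [ "$age_hours" -gt "$MAX_AGE_HOURS" ]; then
    echo "ALERT: newest backup ($newest) is ${age_hours}h old"
    exit 1
fi
echo "OK: newest backup is ${age_hours}h old"
```

Scheduled daily and wired to your alerting channel, this catches the most dangerous failure mode: backups that silently stopped running.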

Putting it all together: example backup automation plan

For a small server, a practical automated setup might be:

All of this is driven by small, focused scripts and the standard Linux scheduling and logging infrastructure, turning backup from an occasional manual task into a reliable, verifiable process.
