Can you please share your backup strategies for Linux? I’m curious to know what tools you use and why. How do you automate/schedule backups? Which files/folders do you back up? What is your preferred hardware/cloud storage, and how do you manage storage space?
What’s a backup?
I use Borg Backup, automated with a bash script that Borg provides. A cron job runs the script at the desired frequency. I keep backups on different computers; ideally I would recommend one copy in the cloud and one copy on a local machine. Borg compresses and encrypts its backups.
Edit: I migrated a server once using the backups from this system and it worked great.
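For anyone curious, a minimal sketch of that kind of setup (the repo path, passphrase handling, and retention counts below are placeholders, not the script Borg ships):

```bash
#!/bin/bash
# backup.sh - minimal Borg backup sketch; paths and names are examples
export BORG_REPO=/mnt/backup/borg-repo        # or user@host:repo for a remote copy
export BORG_PASSPHRASE="$(cat ~/.borg-pass)"  # keep the passphrase out of the script

# Create a compressed, encrypted, deduplicated archive of /home and /etc
borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d_%H:%M}' \
    /home /etc

# Expire old archives so the repo doesn't grow forever
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```

Run it from cron, e.g. `0 3 * * * /home/user/backup.sh >> /var/log/borg-backup.log 2>&1` for a nightly 3 a.m. run.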
I should really cron my Borg script rather than waiting for a sinking anxiety to set in and doing backups at random intervals.
Make sure to check that it actually ran from the cron job; cron is a finicky tool.
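One low-tech way to verify that (file names here are just examples): make the cron entry log its output and drop a success marker, then glance at the marker’s date:

```bash
# crontab -e: log everything; touch a marker only if the script succeeded
0 3 * * * /home/user/backup.sh >> /var/log/backup.log 2>&1 && touch /home/user/.backup-ok

# any time later: when was the last successful run?
ls -l /home/user/.backup-ok
```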
Borg Backup is the gold standard, with Vorta as a very nice GUI on machines that need it. Otherwise, all my other Linux machines are running in Proxmox hypervisors and have container/snapshot/VM backups taken regularly through Proxmox Backup Server to another machine. All the backup data is then replicated regularly and remotely via TrueNAS SCALE replication tasks.
Borg via Vorta handles the hard parts: encryption, compression, deduplication, and archiving. You can mount backup snapshots like drives, without needing to expand them. It splits archives into small chunks so you can easily upload them to your cloud service of choice.
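The mounting bit is built into stock Borg; roughly (repo path and archive name are placeholders):

```bash
# Browse a backup as a read-only FUSE filesystem, no extraction needed
borg mount /mnt/backup/borg-repo::myhost-2024-06-01 /mnt/restore
ls /mnt/restore/home/user/Documents
cp /mnt/restore/home/user/Documents/thesis.tex ~/   # grab a single file
borg umount /mnt/restore
```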
Adding my “Me too” to Vorta/Borg. I use it with Borgbase, which I like because it’s legitimately cheap and they support Borg development. You can also set Borg repos at Borgbase to “append only,” which prevents ransomware or other unexpected “whoopsies” from wiping out your backup history.
I back up most of my computer every hour, but have pruning rules that make sure things don’t get too out of hand. I have a second backup that backs everything up to my NAS (using Vorta, again). This is helpful for things like my downloads folder, virtual machines, or Steam library - things I wouldn’t want to back up over the network, but on occasion I do find myself going “whoops, I wanted that.”
I also have Vorta working on my Mom’s Macbook, then have Borgbase send me an email when there isn’t any activity for longer than a couple of days. Once I got automatic pruning working right I never had to touch this again.
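For anyone replicating the hourly-plus-pruning setup in plain Borg terms, a sketch (the retention counts are just an example policy, not the poster’s):

```bash
# Keep 24 hourlies, 7 dailies, 4 weeklies, 6 monthlies; expire the rest
borg prune --keep-hourly 24 --keep-daily 7 \
    --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo

# Borg >= 1.2: actually reclaim the space afterwards
borg compact /mnt/backup/borg-repo
```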
Borg with Vorta’s my go to as well. Resistance is futile.
Hope.
i do not “hope”, i have faith in the lord 🙏
I plug in an external drive every so often and drag and drop parts of my home dir into it like it’s 1997. I’m not running a data center here. The boomer method is good enough and I don’t do anything important enough to warrant going all out with professional snapshot based backup solutions and stuff. And I only save personal documents, media, and custom config files. Everything else is replaceable.
Yeah, about the same. Old coot here: I plug in a USB3 SSD (encrypted with LUKS) and rsync from the internal HD to this external HD. That’s it.
I do exactly this, but with a little shell script that just has some `rsync -av` and `mv -f` calls instead of dragging and dropping.
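Something in the spirit of that script, as a sketch (source and destination paths are placeholders):

```bash
#!/bin/bash
# Mirror a couple of directories to an external drive
DEST=/mnt/usb/backup

# Keep one previous generation around, just in case
rm -rf "$DEST/documents.old"
[ -d "$DEST/documents" ] && mv -f "$DEST/documents" "$DEST/documents.old"

# -a preserves permissions/times/symlinks, -v lists what's copied
rsync -av ~/Documents/ "$DEST/documents/"
rsync -av ~/.config/ "$DEST/config/"
```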
All my code and projects are on GitHub/codeberg.
All my personal info and photos are on proton drive.
If Linux shits itself (and it does often) who cares. I can have it up and running again in a fresh install in ten minutes.
But Proton Drive doesn’t have a Linux client yet; I suppose you just upload your files there once through the web interface and don’t sync?
Personal stuff is mostly on my phone. And I’ll just sync to the computer what’s needed.
I use rsync to incrementally back up / to a separate drive, as well as to a drive on another device (my server), which then packs, compresses and encrypts the latest backup of all devices daily and uploads them to Hetzner as well as GDrive.
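The incremental part of that can be done with rsync’s `--link-dest`, which hard-links unchanged files against the previous snapshot; a sketch (paths are placeholders, exclude list abbreviated):

```bash
#!/bin/bash
# Snapshot-style incremental rsync: unchanged files become hard links,
# so each snapshot looks complete but only changed files consume space.
SRC=/
DEST=/mnt/backup/snapshots
TODAY=$(date +%F)

rsync -aAX --delete \
    --exclude={/proc/*,/sys/*,/dev/*,/run/*,/tmp/*,/mnt/*} \
    --link-dest="$DEST/latest" \
    "$SRC" "$DEST/$TODAY/"

# Point "latest" at the new snapshot for the next run
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```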
Not to save stuff
I too am raw-dogging my Linux install
I was talking with a techhead from the 80s about what he did when his tape drives failed, and the folly of keeping data alive on a system that doesn’t need to be. His foolproof backup storage is as follows:
- At Christmas, buy a new hard drive. If Moore’s law allows, it should be double what you currently have.
- Put your current backup hard drive into a SATA drive slot. Copy the backup over onto the new hard drive.
- Write the date this was done on the hard drive with a Sharpie. The new hard drive is your current backup.
- Place the now old backup into your drawer and forget about it.
- On New Year’s Day, load each of the drives into a SATA drive slot and fix any filesystem issues (a one-liner for this follows the list).
- Put them back into the drawer. Go to step 1.
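That annual check can be as simple as running fsck on each drive while it’s unmounted (the device name is a placeholder):

```bash
# Force a full check even if the filesystem looks clean, and repair what it finds
sudo fsck -f /dev/sdX1
```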
Shout out to all the homies with nothing. I’m still waiting to buy a larger disk in hopes of rescuing as much data as I can from a failing 3TB disk. I got some read errors and unplugged it about 3 months ago.
Dump configs to backup drive. Pray to the machine spirit that things don’t blow up. Only update when I remember. I’m a terrible admin for my own stuff.
Thanks to you, I don’t need to answer OP anymore 👍
I’m using `rustic`, a lock-free, Rust-written drop-in replacement for `restic`, which (I’m referring to `restic`, and by extension `rustic`) supports always-encrypted, deduplicating, compressed and easy backups without you needing to worry about whether to do a full or incremental backup.

All my machines run hourly backups of all mounted partitions to an append-only repo at Borgbase. I have a file with ignore-pattern globs to skip unwanted files and dirs (e.g. `**/.cache`).

While I think Borgbase is OK, they’re just using Hetzner storage boxes in the background, which are cheaper if you use them directly. I’m thinking of migrating my backups to a handful of homelabs of trusted friends and family instead.
The backups have a randomized delay of 5m and typically take about 8-9s each (unless big new files need to be uploaded). They are triggered by persistent systemd-timers.
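A persistent hourly timer with that kind of randomized delay would look roughly like this (the unit names are hypothetical; the matching `rustic-backup.service` would run the actual `rustic backup` command):

```ini
# ~/.config/systemd/user/rustic-backup.timer (sketch)
[Unit]
Description=Hourly rustic backup

[Timer]
OnCalendar=hourly
RandomizedDelaySec=5m
# Catch up on runs missed while the machine was off or asleep
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with `systemctl --user enable --now rustic-backup.timer`.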
The backups have been running across my laptop, pc and server for about 6 months now and I’m at ~380 GiB storage usage total.
I’ve mounted backup snapshots on multiple occasions already to either get an old version of a file, or restore it entirely.
There is a tool called `redu`, which is like `ncdu` but works on `restic`/`rustic` repos. This makes it easy to identify which files blow up your backup size.

This is the correct way. I wish Hetzner had a storage box size between the 1TB and 5TB versions though.
Scuse the cut and paste, but this is something I recently thought quite hard about and blogged, so stealing my own content:
What to back up? This is a core question to ask when you start planning. I think it’s quite simply answered by asking the secondary question: “Can I get the data again?” Don’t back up stuff you downloaded from the public internet unless it’s particularly rare. No TV, no Movies, no software installers. Don’t hoard data you can replace. Do back up stuff you’ve personally created and that doesn’t exist elsewhere, or stuff that would cause you a lot of effort or upset if it wasn’t available. Letters you’ve written, pictures you’ve taken, code you authored, configurations and systems that took you a lot of time to set up and fine tune.
If you want to be able to restore a full system, that’s something else and generally dealt best with imaging – I’m talking about individual file backups here!
Backup Scenario: Multiple household computers. Home Linux servers. Many services running natively and in Docker. A couple of Windows computers.
Daily backups – Once a day, automate backups of your important files.
On my Linux machines, that’s directories like /etc, /root, /docker-data, and some shared files.
On my Windows machines, that’s some mapping data, Word documents, pictures, geocaching files, generated backups and so on.
You work out the files and get an idea of how much space you need to set aside.
Then, with automated methods, have these files copied or zipped up to a common directory on an always-available server. Let’s call that /backup.
These should be versioned, so that older ones get expired automatically. You can do that with bash scripts or with automated backup software (I use backup-manager for local machines, and BackupPC or robocopy for Windows ones).
How many copies you keep depends on your preferences – 3 is a sound number, but choose what you want and what disk space you have. More than 1 is a good idea since you may not notice the next day if something is missing or broken.
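A hand-rolled version of that versioned daily copy might look like this (the host name, directories and 3-copy retention are examples):

```bash
#!/bin/bash
# Nightly: zip up important dirs into /backup, keep only the newest 3
HOST=$(hostname)
DEST=/backup
DATE=$(date +%F)

tar -czf "$DEST/$HOST-$DATE.tar.gz" /etc /root /docker-data

# Expire everything but the 3 most recent archives for this host
ls -1t "$DEST/$HOST-"*.tar.gz | tail -n +4 | xargs -r rm --
```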
Monthly Backups – Make them Offline if possible
I puzzled a long time over the best way to do offline backups. For years I would manually copy the contents of /backup to large HDDs once a month. That took an hour or two for a few terabytes.
Now, I attach an external USB hard drive to my server, with a smart power socket controlled by Home Assistant.
This means it’s “cold storage”. The computer can’t access it unless the switch is turned on – something no ransomware knows about. But I can write a script that turns on the power, waits a minute for it to spin up, then mounts the drive and copies the data. When it’s finished, it’ll then unmount the drive and turn off the switch, and lastly, email me to say “Oi, change the drives, human”.
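As a sketch, with Home Assistant’s REST API driving the plug (the host, token, entity ID and mount point are placeholders, and the mail step assumes a working `mail` setup):

```bash
#!/bin/bash
# Monthly cold-storage copy: power the drive, copy, power it off again
HA=http://homeassistant.local:8123
TOKEN=$(cat /root/.ha-token)
PLUG=switch.backup_drive

ha_call() {  # turn the smart plug on or off via Home Assistant's REST API
    curl -s -X POST "$HA/api/services/switch/$1" \
        -H "Authorization: Bearer $TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"entity_id\": \"$PLUG\"}" > /dev/null
}

ha_call turn_on
sleep 60                        # let the drive spin up
mount /dev/disk/by-label/OFFLINE /mnt/offline
rsync -a --delete /backup/ /mnt/offline/
umount /mnt/offline
ha_call turn_off

echo "Offline backup done. Change the drives, human." | \
    mail -s "Monthly offline backup" admin@example.com
```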
Once I get that email, I open my safe (fireproof and in a different physical building) and take out the oldest of three USB caddies. Swap that with the one on the server and put that away. Classic grandfather/father/son backups.
Once a year, I change the oldest of those caddies to “Annual backup, 2024” and buy a new one. That way no monthly drive will be older than three years, and I have a (probably still viable) backup by year.
BTW – I use USB3 HDD caddies (and do test for speed – they vary hugely) because I keep a fair bit of data. But you can also use one of the large-capacity USB thumb drives or MicroSD cards for this. It doesn’t really matter how slowly it writes, since you’ll be asleep when it’s backing up. But you do really want it to be reasonably fast to read data from, and also large enough for your data – the above system gets considerably less simple if you need multiple disks.
Error Check: Of course with automated systems, you need additional automated systems to ensure they’re working! When you complete a backup, touch a file to give you a timestamp of when it was done – online and offline. I find using “tree” to catalogue the files is worthwhile too, so you know what’s on there.
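Concretely, that check can be as small as (paths are examples):

```bash
# Touch a marker so the file's mtime records when the backup finished
touch /mnt/offline/LAST_BACKUP
# Catalogue the contents so you know what should be on the drive
tree -a /mnt/offline > /backup/offline-catalogue-$(date +%F).txt
```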
Lastly – test your backups. Once or twice a year, pick a backup at random and ensure you can copy and unpack the files. Ensure they are what you expect and free from errors.
Example of a Bash script that performs the following tasks:
- Checks the availability of an important web server.
- Checks disk space usage.
- Makes a backup of the specified directories.
- Sends a report to the administrator’s email.
Example script:
```bash
#!/bin/bash

# Settings
WEB_SERVER="https://example.com"
BACKUP_DIR="/backup"
TARGET_DIRS="/var/www /etc"
DISK_USAGE_THRESHOLD=90
ADMIN_EMAIL="admin@example.com"
DATE=$(date +"%Y-%m-%d")
BACKUP_FILE="$BACKUP_DIR/backup-$DATE.tar.gz"

# Checking web server availability
# (check the status line itself: HTTP/2 responses say "HTTP/2 200", not "200 OK")
echo "Checking web server availability..."
if curl -s --head "$WEB_SERVER" | head -n 1 | grep -q " 200"; then
    echo "Web server is available."
else
    echo "Warning: Web server is unavailable!" | mail -s "Problem with web server" "$ADMIN_EMAIL"
fi

# Checking disk space (second line of df output is the root filesystem)
echo "Checking disk space..."
DISK_USAGE=$(df / | awk 'NR==2 { print $5 }' | sed 's/%//')
if [ "$DISK_USAGE" -gt "$DISK_USAGE_THRESHOLD" ]; then
    echo "Warning: Disk space usage exceeded $DISK_USAGE_THRESHOLD%!" | mail -s "Problem with disk space" "$ADMIN_EMAIL"
else
    echo "There is enough disk space."
fi

# Creating backup ($TARGET_DIRS deliberately unquoted so each dir is a separate argument)
echo "Creating backup..."
if tar -czf "$BACKUP_FILE" $TARGET_DIRS; then
    echo "Backup created successfully: $BACKUP_FILE"
else
    echo "Error creating backup!" | mail -s "Error creating backup" "$ADMIN_EMAIL"
fi

# Sending report
echo "Sending report to $ADMIN_EMAIL..."
REPORT="Report for $DATE\n\n"
REPORT+="Web server status: $(curl -s --head "$WEB_SERVER" | head -n 1)\n"
REPORT+="Disk space usage: $DISK_USAGE%\n"
REPORT+="Backup location: $BACKUP_FILE\n"
echo -e "$REPORT" | mail -s "Daily system report" "$ADMIN_EMAIL"

echo "Done."
```
Description:
- Check web server: uses the `curl` command to check if the site is available.
- Check disk space: uses `df` and `awk` to check disk usage. If the threshold (90%) is exceeded, a notification is sent.
- Create a backup: the `tar` command archives and compresses the directories specified in the `TARGET_DIRS` variable.
- Send a report: a report on all operations is sent to the administrator’s email using `mail`.
How to use:
- Set the desired parameters, such as the web server address, directories for backup, disk usage threshold and email.
- Make the script executable: `chmod +x /path/to/your/script.sh`
- Add the script to `cron` to run on a regular basis: run `crontab -e` and add a line. Example to run every day at 00:00: `0 0 * * * /path/to/your/script.sh`
One reason for moving to Nix was declarative config, so at least that part of my system is a series of Nix files to build into a working setup.
…The rest… let’s just say “needs improvement” & I would like to set up a NAS.