I have a disk for local backups (that is the only purpose of that disk). I was wondering what would make it last longer:
- Keep it mounted to my server permanently (current solution)
- Keep it unmounted most of the time, mount it when I’m going to do a backup (either daily or every 3 days; I don’t mind changing that), and unmount it after the backup is done.
What would be the best strategy?
If you want your hard drive to last, it’s important to reduce spin-ups and spin-downs. Whether the disk is mounted matters far less.
It may be tempting to save power by spinning down your disks the moment they’re no longer mounted (which some disks do by themselves, though that can often be turned off). However, keeping the disk going between writes may actually be beneficial for its service life.
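On Linux, you can usually inspect and relax that built-in power management with `hdparm`. This is only a sketch, assuming `hdparm` is installed; `/dev/sdX` is a placeholder for your backup drive, and some drive firmware ignores these settings entirely:

```shell
# Show the drive's current Advanced Power Management level.
# Values of 127 or below permit spin-down; higher values don't.
hdparm -B /dev/sdX

# Near-maximum performance: minimal head parking, no APM spin-down
# (255 would disable APM entirely, but not all drives accept it).
hdparm -B 254 /dev/sdX

# Disable the standby (spin-down) timeout.
hdparm -S 0 /dev/sdX
```

These settings don’t always survive a power cycle, so it’s common to reapply them from a boot script or udev rule.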
I don’t think you’ll ruin your drive by spinning it up daily. Mounting and unmounting aren’t what makes the drive spin up or down, though; that’s normally driven by plain I/O activity. For this reason, it’s best to write your backup scripts so that once the backup starts, there’s always something being written to the disk. Don’t touch a file to start the backup, then do a deep folder scan, then start copying, then stop and do another scan; instead, scan once and do one long copying operation. This matters especially if you’ve bought one of those “power efficient” drives.
It’d be wise to make your backup disk redundant. In a perfect world, you’d have three copies of each file, spread across at least two physical locations. In the real world, that usually translates to “back up to a file, then upload an encrypted copy to a cloud server”. Don’t just back up to a local NAS: when your house burns down, your server and your NAS will be gone together.
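A minimal sketch of the “encrypted copy for the cloud” step, assuming `tar` and GnuPG are available. The paths, the passphrase file, and the `rclone` remote name are all placeholders; the key point is that only an encrypted blob ever leaves your machine:

```shell
#!/bin/sh
# Turn a backup directory into one encrypted archive that is safe to
# hand to a cloud provider. Keep the passphrase file off the cloud!
set -eu

make_encrypted_archive() {
    src="$1" out="$2" passfile="$3"

    # One tarball, symmetrically encrypted with AES-256.
    tar -C "$src" -cf - . |
        gpg --batch --yes --pinentry-mode loopback \
            --symmetric --cipher-algo AES256 \
            --passphrase-file "$passfile" -o "$out"
}

# Example: archive, then upload with a (separately configured) rclone
# remote -- "cloud:" is a placeholder name:
# make_encrypted_archive /mnt/backup /tmp/backup.tar.gpg /root/backup.pass
# rclone copy /tmp/backup.tar.gpg cloud:backups/
```

Any upload tool works here; the encryption means you don’t have to trust the provider with the contents.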
If you don’t want to bother with cloud servers (e.g. because your internet connection has a data cap), I’d recommend adding redundancy with multiple hard drives in a RAID configuration, so that when a drive does die, it won’t take all of your data with it. If you do set up RAID, it’s worth ordering the drives from different stores to get drives from different production batches. Drives from a single batch have an annoying tendency to die around the same time, so when one drive eventually fails, a drive from the same batch may die before you’ve had a chance to replace the broken one and restore the array to full redundancy.
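On Linux, the usual tool for this is `mdadm`. A rough sketch of building a two-disk mirror, where `/dev/sdb` and `/dev/sdc` stand in for your two (different-batch!) drives and the config path may differ per distribution:

```shell
# Create a two-disk RAID 1 mirror and put a filesystem on it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0

# Record the array so it assembles automatically on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Check array health; a failed member shows up here first.
cat /proc/mdstat
```

RAID 1 survives one failed drive, but remember it’s redundancy, not a backup: a bad delete is mirrored just as faithfully as a good write, which is why the off-site copy above still matters.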