ZFS isn’t a backup, but it is a gateway drug
Damn that’s good
I get it, esp. in a professional environment.
But “Schrödinger’s data” rubs me the wrong way. The point OOP’s making is not a question of whether the data is there or not, it’s a question of whether you can restore a botched system with a few commands and in a realistic amount of time.
Case in point: I (private person, private system) never needed to fully restore, knock on wood. But the data is there - I have (manually) restored single files or directories on a few occasions.
it’s a question of whether you can restore a botched system with a few commands and in a realistic amount of time.
A few years ago my employer was the victim of ransomware. We're speaking here about a massive network and all sorts of databases and services built on top of those, spanning decades and many different technologies. Basically several thousand employees and a decades-long focus on digital work and automation. Data restoration was not an issue. I haven't heard of anyone losing data.
However, restarting all the services was not as easy. Many of these depended on each other, and there were some circular dependencies that had grown organically over the years. It took about two months to restore core functionality (mostly SAP and email) and many more months to restore all sorts of support services that were required for normal day-to-day work. Two years after the incident the last applications were back online.
If the data is present but difficult to restore, it’s annoying. You might need to spend a few days fixing stuff.
If the data is gone, it’s devastating and can bankrupt a company. On a personal level it’s the same as having all your photos destroyed in a fire. And backups not containing the right data are very common.
Adding another link, perhaps not equivalent:
I use it to back up media. If the backup roots are your drives, restoring is as easy as copy-paste, merge folders, and wait.
Edit: actually, you may be able to just reverse the direction of the sync. I’m not sure.
The next day, the novice’s disk crashed. Three days later, the novice was still reinstalling software.
I laugh in NixOS
Okay, fair, upgrading and clean installs take a while. Especially on slow internet connections. But it doesn't really compile much; we have cache.nixos.org with prebuilt binaries for most packages.
These kinds of relentless posts finally got me to write a script that verifies all my backed up files using md5 checksums.
Verifying the files are there in your backup is only, like, 10% of verifying that it's a real, usable backup.
The important question is: can you successfully restore those files from the backup? Can you successfully put them back where they’re supposed to be after losing your primary copy?
How do you test this without risking your primary copy?
I specifically stated that I verify the file content via md5 hash. And I keep the original directory structure, so yes, if I need to restore these I can.
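A minimal sketch of that kind of verification pass (the directories here are throwaway stand-ins for the real source and backup roots):

```shell
# Build an md5 manifest of the source tree, then check the same
# relative paths inside the backup root.
SRC=$(mktemp -d); BAK=$(mktemp -d)      # stand-ins for the real roots
MANIFEST=$(mktemp)
echo "photo bytes" > "$SRC/img.jpg"
cp "$SRC/img.jpg" "$BAK/img.jpg"        # the "backup"

( cd "$SRC" && find . -type f -exec md5sum {} + ) > "$MANIFEST"
( cd "$BAK" && md5sum -c "$MANIFEST" )  # prints "./img.jpg: OK"
```

Run against the real trees, this proves the backed-up bytes match the originals, byte for byte, which is exactly the check being described.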
Edit: no idea what there is to downvote here. It was definitely weird to have an md5 checking script that took days to develop and confirm working as expected and which takes days to run on several TBs of files, dismissed as simply “checking that the files are there”. No, it checks that the bytes present in the backed up copy match those in the original. You know, what an md5 checksum is.
And when the restoration of that data fails?
Are you being willfully ignorant or obstinate? Or do you not understand the concept that even with the data there, restoration of that data can fail in many ways?
A couple of times I needed to restore sites from backup, and it failed. Not because the data wasn't there. Heh
Having the data is useless when the restoration process fails, which it can do for numerous reasons.
Are you being willfully ignorant or obstinate?
No one has explained why proving the data can be read end to end and matches the original is somehow lacking. Including you.
Probably because it isn’t lacking. For a home user who doesn’t want to lose their files, this is more than sufficient. Especially given that I have two local backups and a cloud one. None of which is exactly cheap.
Yes computers fail in many ways. What exactly are you people trying to accomplish here? Just give me anxiety? Do you have 14 TB of free storage space to lend me that I can use to do the full process of re-copying the backed up data to? …
Are you… just talking about stuff like pictures and videos and important documents? I mean, I would have thought the context was clear that that’s not really what’s being discussed. But if not, then sure, if you just have files backed up, then all you need to worry about is making sure you have enough copies of that as you need to not lose it.
Hmm. I’d better explain that.
Anywhere you have data that exists in one place, it is a matter of time before it dies. Who knows how long it’ll be, but it will eventually die.
If you have data in two places, then when it dies in one of those places, as long as it also hasn’t died in the other place, you have one copy and it will eventually die unless you replicate it somewhere else.
And many people find that when they go to read those burned discs or read that backup external drive - oops, it's damaged or dead. And then that data is gone.
So for unimportant things, a single backup somewhere is probably fine. But is that backup in your house with the computer it's also on? If your house burns, those two places are gone and your data is gone. Is that worth the tiny risk? Up to you. You know how much you care about your data.
If you really want to protect something valuable like important documents and family pictures, then ideally you want at least one copy offsite. If it's important, it's no bad thing to have two copies of it offsite along with perhaps one backup locally so it's convenient. While you don't need ten copies of data, it's surprising how quickly 1-2 copies can go bad at the same time, or one goes bad and you don't replace it and another goes bad and… quickly you run the risk of data loss.
For a home user who doesn’t want to lose their files,
That’s not the topic at hand, which one might’ve been able to tell from context clues.
two local backups and a cloud one.
That is a pretty good minimal setup. Not disparaging, that’s better than probably 95% if not more like 99% of people do.
Just give me anxiety?
No, you’re the one in a conversation that’s really not about your type of situation.
We're talking about businesses who have servers - internet servers, internal servers. These run software. They have databases with large amounts of data. They have programs that have lots of settings, configured in various ways. Servers set up to run services on the LAN and/or WAN and/or across the internet.
On your home computer, you can reinstall Windows, install Office, install Adobe, all the other software you use. And you can take the annoying time to re-customize everything to get it set up to your liking. Then copy all your documents over. You won’t have everything ready-to-go unless you use a fancy backup and restore method (that starts to touch on the subject being discussed here - that restore is not guaranteed unless you’ve tested it. It’s amazing how often that goes wrong), but it’s okay, you have time.
In a corporate environment, if something breaks and you need to restore that data and software, you need it up and running ASAP.
Now, you'd think it would be as simple as getting the hardware, installing the OS, installing the software, and restoring the data - but that's not necessarily the case. Not the same version of the software? Data formatting might've changed. Settings might've changed. Does every version of everything work together? Underlying pieces of the system are different? Might cause things to break.
I won’t get into the technical details beyond that, but the point is that we’re not talking about just some pics and docs.
So that's the genesis of the misunderstandings here. It's a wholly different topic than what you're dealing with.
But yeah, for you, you’ve got a good backup system going. I personally have two different cloud providers for the data I want to keep the most, but that’s not all the pics and such, just for a subset of it.
Are you sure you have all the files required? Are they restored with the right permissions and metadata automatically?
Are you sure you have all the files required?
How could I possibly be sure of that? Obviously I've tried my best to back up everything I would ever need. For many reasons I cannot back up every single file, so I've made the best decisions around that I know how.
Are they restored with the right permissions and metadata automatically?
Nothing about it is automatic, by design. Doesn’t need to be. And permissions aren’t something I particularly care about since there aren’t multiple users.
I’m backing up and verifying ~ 14 TB of files and have taken great pains to ensure I’m doing everything right.
Any idea why I got downvoted? Also…why the quizzing?
You check if you backed up everything correctly, and if the process works by restoring the backup and confirming they work.
But you do you, in the end it’s your neck on the line.
My home files are not business critical infrastructure. I’m taking several steps further than any normie would take. Keeping two backups locally, confirming their byte content, sending a subset of the files to a cloud service.
To read your comments here it seems you think I'm extremely cavalier and reckless… Because I haven't recopied 14 TB back to their original locations to ensure that… What, copying the files works? Reading the full contents of each file and comparing to the original is somehow lacking? I don't have 14 TB in additional storage lying around to test that… Copying is still a thing?
It’s not like I lose a billion dollars if I lose some photos. Which again, I’ve gone to great lengths to keep safe.
Does the newly set up environment exactly match the previous? Same software versions?
Backup fire drill tomorrow!
BLAKE3 (blak3?) is where it's at
?
Also it must come from the Backup region of France. Otherwise it's just sparkling archive.
is it pronounced beaucoup?
Yes, it's French for RAID 0
It still is in France; it's like color and colour
Ooh sparkling archive actually sounds really fancy, I’ll start using that
One thing I emphasize in every training I do is that you do not have backups until you know exactly how long it will take to restore.
That way you can tell your boss it’ll take three times as long and be hailed as a miracle worker, as Scotty intended.
Do people actually do that? Because that would be funny
I see you’ve never worked in corporate IT
Yes, because otherwise you never get a break.
I 100% started doing this years ago and work is far less stressful. Temper their expectations, because it’s rare for any boss or company to treat you fairly and honestly.
What if my backup is just files and there’s nothing to restore?
Like say I take my existing drives, full of totally working media, and duplicate them, use the originals as a backup and the new drives as the active.
Does that count as a backup? No restoration involved.
In the spirit of this thread: no.
Recovering with the backup should put you back to an operational state equivalent to when the backup was taken.
I.e. if you’ve restored some files, but something is still not working then the backup failed its purpose.
E.g. the timestamps on the files might be important, do they need to be stamped with the time of the backup or the time of the restore?
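A tiny illustration of how easily that metadata gets lost (GNU coreutils assumed): a plain `cp` stamps the copy with the time of the restore, while `cp -p` keeps the original mtime.

```shell
cd "$(mktemp -d)"
touch -d '2020-01-01' original   # a file with an old timestamp
cp original plain_copy           # mtime becomes "now" (restore time)
cp -p original preserved_copy    # -p preserves the original mtime
stat -c '%y  %n' original plain_copy preserved_copy
```

Whether the restore-time or backup-time stamp is "correct" depends entirely on what consumes those files afterwards, which is the point above.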
Sure, if my active drives died after this swap, and I had to restore from the old, now backup, drive, I’d be back at the operational state I was at the time of the backup.
That tracks.
It still doesn't run anything tho. It's just a drive. It doesn't house an OS or anything, just files that aren't restricted in any way.
IMHO there is no point backing up an OS drive, just rebuild it*.
Data is the important thing to back up because you usually can’t regenerate it.
* the corollary here is that you’ve backed up the configuration required to rebuild the OS.
I wouldn't. I keep all of my data separate from my OS drive entirely, so I can reformat or install a new OS whenever I feel like it… a nasty old habit from bootleg Windows 7 well beyond its age, when reformatting every 6 months was good hygiene, before I found Linux… but it gave me great data management insight.
Do you know how to transfer the files back if your OS has completely failed?
Sure, nearly everything is on a separate drive from the OS. I don’t put much on the OS drive on any of my computers unless it needs to run there and that’s easy to reinstall. Easy to fix things that way.
No it is not. Just use a checksum. Like a normal person…
Cool.
Still doesn't mean you can boot from it
ChatGPT told me to do
sudo sha256sum /dev/sda1 > /dev/sda1
So. Is this wrong? I thought it backs up the data
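Yes, very wrong: the `>` redirects the hash into /dev/sda1 itself, overwriting the start of the filesystem. A safe version writes the checksum somewhere else (demonstrated here on an ordinary file, since hashing a real partition needs root):

```shell
disk=$(mktemp)                       # stand-in for /dev/sda1
echo "precious data" > "$disk"
sha256sum "$disk" > checksums.txt    # hash goes to a DIFFERENT file
sha256sum -c checksums.txt           # verify later: prints "<path>: OK"
```

And even a matching checksum only proves the bytes are intact, not that you can boot from them.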
Yeah, we're going to see much more of this moving forward. Yesterday I installed Linux for a friend and they asked about fixing problems. I told them to always look at the date & compatibility when they search for solutions. They then volunteered: "and I guess I can always ask ChatGPT, it's pretty good with these things". I grunted non-committally.
Or that it’s complete.
I just run a script that runs a bunch of rsync commands. So I guess every run kinda confirms the backup is functional. I have no use for versioned backups, nor could I afford the hard drive space necessary (thanks Sam)
it’s a fair argument but it’s also bullshit if you’re following the process and practices that you used when you tested your backup
lots of my job is backups and verification of the backups
Bold of you to assume people/companies test backups more than once.
Case in point: I once got instructed to “enable EBS snapshots” for customer deployments to meet a new backup requirement. Disaster recovery was a completely different feature we only kind of got to a couple years later and afaik, remains manual to this day.
that’s fair and I agree but it’s not a true maxim
it's a good principle but I hear it a lot so it's a thing I get annoyed about because it's directed at me even though I have the receipts and a proven record that it's not a fact
An untested disaster recovery plan is wishful thinking
Has anyone actually had failed backup restoration before? It’s been a meme forever, but in my ~15 years of IT, I’ve never seen a backup not restore properly.
Absolutely. Used to work at a small MSP. Got ultra unlucky in that we got chosen as the test case target for a zero day that leveraged our Remote Support tools, so our own systems and all of our client systems that were online got hit with ransomware in a very short time frame.
Some clients had local backups to Synology boxes and those worked ok thankfully. However all the rest had backups based on Hyper-V. The other local copy was on a second windows server that also got hit so the local copies didn’t help. They did also have a remote copy which wasn’t encrypted.
So all good right? Just pull the remote backup copy and apply that… Yeah, every time we had ever used the service before, it had either been single servers that physically died and took their disks with them, or just file-level restores.
Those all worked fine. Still sounds like not a problem, right? Nope. We found that a couple of the larger servers had backups that didn't actually have everything, in spite of being VM images. No idea how their software was even able to do that.
And the worst part was that their data transfer rate was insanely slow. About 10 Mbps. Not that per server or per client. Nope, that was the max export rate across everything. It would have taken literally months to restore everything at that rate.
I hate to say it but yes, we did in fact pay the ransom and then had to fight for several days going through getting things decrypted. Then going through months of reinstalling fresh copies and/or putting in new servers. Also changing our entire stack at the same time. Shockingly we handled it well enough we lost no clients. Largely because we were able to prove we couldn't have known ahead of time.
If you read through all that I’ll even say the vendors name. It was StorageCraft. I now have a deep hate for them.
Also one more is that with the old Apple HFS+ filesystem based time machine backups it would sometimes report as a valid self checked backup even if it had corruption. It would do this as long as some self check confirmed that it could fix the corruption during a restore. However if you tried directly browsing through the time machine backups it would have files that couldn’t be read, unless again you did a full system restore with it.
Nearly lost my wife's semester-end work before finding out it worked that way.
I can’t confirm it but seems it is fully fixed with APFS and might be one of the reasons they spent the effort to make that transition.
I’ve had an IT Career for about as long as you. I’ve had 2 memorable restore failures and got real lucky both times.
The first was a ransomware incident, and the onsite backup was not hit, but it was corrupt. Thankfully, the client had been using a 3-2-1 strategy, and the off-site one was fine.
The second was a situation where a failed update rendered a client’s RDS unbootable. This time, they didn’t have an on-site backup and the off-site one was corrupt. This time I happened to get immensely lucky in that there was no real data on that RDS, so I was able to spin up a fresh one, and install their LOB app and all was good.
We now test that all backups are stable every 6 months.
Yep. At one place I worked, we did a big off-site disaster recovery exercise every year.
Most of the time it went fine, but there were multiple years where a restore didn’t work due to an issue with one or more tapes. Either the data and/or indexes couldn’t be read, or the tape physically failed during the restore.
Backups aren’t backups unless they’re tested.
in my ~15 years of IT, I’ve never seen a backup not restore properly.
I remember Outlook backups failing like nothing else during the restore process 25 years ago.
Which was fucked because it would take 2 weeks to rebuild only to find out it didn’t work.
Fun video. Many backup options failed iirc.
Banger of a video! Thanks!
I’ve made mistakes before, and had that panic realization set in. I can only imagine the feeling this guy got once he realized what he just did. Nightmare fuel.