

Buddy, I was merely restating the parent comment’s sarcastic take with even more sarcasm.
If it’s the only solution it stands to reason that the problem would be solved after applying it. So there would be no other solution needed afterwards. Therefore it would be the last solution, the solution after the penultimate one.
The machine that was last installed in 2014 is Ubuntu LTS. It’s been upgraded through all the LTS releases since then. Currently on 22.04 with the free Ubuntu Pro enabled. I use a mix of Ubuntu LTS and Debian stable on other machines. For example my laptop is on Debian 12. Debian has been the most reliable OS and community for over 30 years and I believe it’ll still be around 30 years from now, if we haven’t destroyed ourselves. 😂
Yeah, it seems counterproductive to ditch FOSS in the name of self-sufficiency. If it were really about that, assembling an army of software people to learn and contribute to important FOSS codebases would be much more productive in my opinion. It feels like Harmony Next is about something else. Perhaps some wholesale insurance. Or someone’s plans of grandeur.
That kinda makes sense at this stage. If you spend time understanding what those commands do, you’d understand how the system works, and most importantly how to not fuck it up. Keep in mind there’s a lot of misinformation and bad practice in guides out there. People who barely know more than you feel confident enough to share snippets without warnings. Ten or twenty years ago far fewer people had experience with Linux, and most of the people confident enough to write about it were technical people who knew what they were talking about. There was less destructive misinformation.
But yeah when you learn, the need or urge to reinstall disappears. I stopped reinstalling in 2014. Took me 9 years to unfuck my Windows brain and understand enough to not shoot myself in the feet. Main machine hasn’t been reinstalled since then. That’s with replacing multiple main boards, switching AMD > Intel > AMD, changing SSDs, going from single SSD to mdraid, increasing in size over time, etc.
Sir, this is not Windows.
This is a very accurate explanation. ☝️
I also don’t like System76 hardware, but they’re doing this software work that they’re hoping to recoup with hardware sales. If this becomes a good replacement for GNOME, for me it would be worth paying whatever they’d make from a laptop. But I ain’t buying their laptop because I don’t like it and I don’t need it. So I’m gonna give them the difference somehow. 😂
If this becomes a good replacement for GNOME I’d pay the profit margin of a System76 laptop for it.
What you’re experiencing isn’t hyperinflation. Hyperinflation is more like when a loaf of bread is $1 today, $2 a month later, and $10-100 by the end of the year. I grew up in a country during hyperinflation.
Right so I guess the question of 3 is whether it means 3 backups or 3 copies. If we take it literally - 3 copies, then it does protect from user error only. If 3 backups, it protects against hardware failure too.
E: Seagate calls them copies and explicitly says the implementer can choose how the copies are distributed across the 2 media. The woodchipper scenario would be handled by the 2 media requirement.
Hm I wonder why snapshots wouldn’t satisfy 3. Copies on the same disk like /file, /backup1/file, /backup2/file should satisfy 3. Why wouldn’t snapshots be equivalent if 3 doesn’t guard against filesystem or hardware failure? Just thinking and curious to see opinion.
Does this make sense?
If Raid is backup, then Unraid is?
Try ZFS send if you have ZFS on the other side. It’s insane. No file IO, just snap and time for the network transfer of the delta.
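For anyone curious, here’s a minimal sketch of the workflow (pool and dataset names like `tank/data` and `backuphost` are placeholders, not from my actual setup):

```shell
# One-time full send: take a snapshot, then stream the dataset to the remote pool
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive backup/data

# After that, each run only streams the delta between two snapshots (-i = incremental),
# so there's no per-file IO at all -- just the changed blocks over the network
zfs snapshot tank/data@hourly-01
zfs send -i tank/data@base tank/data@hourly-01 | ssh backuphost zfs receive backup/data
```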
Every hour. Could do it more frequently if needed.
It depends on how resource intensive the backup process is.
Consider an 800GB Immich instance.
Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the difference to the backup. Any backup system that operates on top of the file system would take about this long. In addition, unless you’re using something that can take snapshots of the filesystem, you have to stop Immich during the backup process in order to prevent backing up an invalid app state.
Using ZFS send on the other hand (with syncoid) takes less than 5 seconds to discover the differences and the rest of the time is spent on the data transfer, at 100MB/s in my case. Since ZFS send is based on snapshots, I don’t have to stop the service either.
When I used Duplicity for backups, I would back up once a week because the backup process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there’s almost no visible impact.
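The nice thing about syncoid (from the sanoid project) is that it handles the snapshot bookkeeping for the incremental sends itself, so the hourly job is one line. Something like this in a crontab (dataset and host names are made up for the example):

```shell
# m  h  dom mon dow  command
  0  *  *   *   *    syncoid tank/immich backuphost:backup/immich
```

syncoid figures out the most recent common snapshot on both sides and sends only the delta since then.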
I’m now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I’ll move all my machines to ZFS on root.
What’s the second B stand for?
That’s the sad part. They only get the first half right. 😂
I’m writing here to give my sincere applause to this effort.