
Yeah, for sure. SCSI died when SAS emerged, and that’s been basically 20 years now.
Any SCSI stuff still lying around is going to be a literal decade-plus old, and unless you have a VERY specific need that requires it (which really just means squeezing another few years out of already-installed gear), it’s effectively dead and shouldn’t be bought for anything other than paperweights or coffee-table decoration.
MSA30
Unless my memory fails me, that’s billion-year-old SCSI drives.
Do not buy billion-year-old SCSI drives, enclosures for SCSI drives, or, uh, well, anything like that.
It’s going to use an enormous amount of power, perform slower than a single modern drive, and be prone to failure because, well, it’s a billion years old.
That’s not something you want.
For bandwidth-intensive stuff I like Wholesale Internet’s stuff.
The hardware is very, uh, old, but the network quality is great since they run an IX (internet exchange). And it’s unmetered too, so it’s probably sufficient.
Universality, basically: almost everyone, everywhere has an email account, or can find one for free. And every OS and every device has a giant pile of mail clients for you to choose from.
And I mean, email is a simple, well-understood, reliable tech stack: I host an internal mail server for notifications and updates and shit, and it’s fast and works perfectly.
It’s only when you suddenly need to email someone OTHER than your local shit that it turns to complete shit.
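For the internal case, the bar really is that low: firing a notification through a local relay is a dozen lines in basically any language. A sketch, where every hostname and address is made up and a plain-SMTP relay on port 25 is assumed:

```python
import smtplib
from email.message import EmailMessage

# Build a plain-text notification; all addresses here are illustrative.
msg = EmailMessage()
msg["From"] = "alerts@internal.lan"
msg["To"] = "admin@internal.lan"
msg["Subject"] = "backup finished"
msg.set_content("Nightly backup completed without errors.")

# Hand it to the (assumed) internal relay; no auth or TLS on a LAN-only box.
with smtplib.SMTP("mail.internal.lan", 25) as s:
    s.send_message(msg)
```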
Even following ‘beginner’ tutorials is hit or miss
It’s gotten even worse than it used to be, because more than half the “tutorials” I’ve run across are clearly AI-written and basically flat-out wrong.
Of course, they’re ALSO the “answers” that get pushed by Bing/Google, so even if you run into someone who is willing to follow documentation, they’re going to get served worthless slop.
One thing I will give Arch: if there’s a wiki entry for something, it’s at least written by a human and is actually accurate, which is more than I’ve found ANYWHERE else.
And more fun: lots of laptops have really goofy routing. I’ve got one where DP alt mode on the USB-C ports is on the dGPU, but the HDMI ports are on the iGPU. And the internal panel is on the iGPU unless you switch it to the dGPU, because yay, mux.
Why? I don’t know. Too much meth while laying the board out or something I guess.
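If you want to check how your own machine is wired, sysfs makes it pretty easy on Linux. A quick sketch, assuming the standard /sys/class/drm layout (connector names will vary per machine):

```python
#!/usr/bin/env python3
# Map each display connector to the GPU (PCI device) that drives it.
from pathlib import Path

drm = Path("/sys/class/drm")
for conn in sorted(drm.glob("card*-*")):           # e.g. card1-DP-1, card0-eDP-1
    card = conn.name.split("-", 1)[0]              # e.g. card1
    # Each card's "device" symlink resolves to its PCI device directory.
    pci = (drm / card / "device").resolve().name   # e.g. 0000:01:00.0
    status = (conn / "status").read_text().strip() # connected / disconnected
    print(f"{conn.name:20} -> {card} ({pci}), {status}")
```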
10940X
“They say”, but they’re right. Ryzen chips do have worse idle power usage, but you’re talking about 10 W or so, at most.
And uh, if you were looking at an X-series CPU, I can’t see how that 10 W is a dealbreaker, because you were already looking at a shockingly inefficient chip.
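Back-of-the-envelope math on what that delta actually costs, with the electricity rate being an assumption you should swap for your own:

```python
# Rough yearly cost of a 10 W idle-power difference, running 24/7.
watts = 10
rate = 0.30                               # $/kWh, assumed; adjust for your utility
kwh_per_year = watts * 24 * 365 / 1000    # = 87.6 kWh
print(f"{kwh_per_year:.1f} kWh/yr, about ${kwh_per_year * rate:.0f}/yr")
```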
Everything is temporary, except for that 25-year-old system that’s keeping everything running and can’t be replaced, because nobody knows how or why it works, just that if you touch it, everything falls over.
I don’t recall exactly, but it’s more like days than hours. At some point the other instances will mark you as down and stop trying to federate with you, so there’s a hard limit, but it’s fairly generous and not especially aggressive.
I found the PR for the queue, and it mentions retries but doesn’t seem to mention exact timing, at least to my quick read. ( https://github.com/LemmyNet/lemmy/pull/3605 )
Yeah. There’s a retry queue, which does expire after a certain time period, but for a short outage that’s how it’d work.
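To be clear on the behavior, here’s an illustrative sketch (NOT Lemmy’s actual code, and both timing constants are assumptions): deliveries back off on failure, and once an activity ages past the hard limit it’s dropped and the peer gets marked dead.

```python
import time

RETRY_BASE = 60.0        # seconds until the first retry (assumed value)
MAX_AGE = 3 * 86400      # hard expiry measured in days, not hours (assumed value)

def next_attempt(created_at: float, failures: int) -> float | None:
    """When to try delivering again, or None once the activity has expired."""
    if time.time() - created_at > MAX_AGE:
        return None      # give up; the instance gets marked as down
    return time.time() + RETRY_BASE * (2 ** failures)  # exponential backoff
```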
Debian stable is great: it’s, well, stable. It’s well supported, has an extremely long support window, and the distro has a pretty stellar track record of not doing anything stupid.
It’s very much in the install-once-and-forget-it category, just gotta do updates.
I run everything in containers for management (but I’m also running something like 90 containers, so a little more complex than your setup) and am firmly of the opinion that, unless you have a compelling reason to NOT run something in a container, just use the containerized version.
I’m the same way. If it’s split license, then it’s a matter of when and not if it’s going to have some MBA come along and enshittify it.
There’s just way, way too much prior experience showing that’s what eventually happens for me to be willing to trust any project that’s doing that: the split means they’re going to monetize it, and then they have all the incentive in the world to shit all over the “free” userbase to try to get them to convert.
You keep cloning and configuring shit on a Win10 instance because you can’t find the key?
That’s silly and you should just stop doing that: https://github.com/massgravel/Microsoft-Activation-Scripts
There you go! One less problem to deal with.
Snapraid parity is offline, so that’s not strictly accurate.
You build the parity, and then until you do a sync/scrub/rebuild they don’t do shit, so there’s no reason to keep them spun up.
If the drives are slow spinning up, that’s probably not a fatal concern, but there are zero details here.
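In practice the whole parity workload fits in one scheduled window, something like this sketch (assumes snapraid is already set up with a valid snapraid.conf; outside this window the parity drives can stay spun down):

```python
# Nightly SnapRAID maintenance: parity only gets touched here.
import subprocess

def nightly_maintenance() -> None:
    # Update parity to cover files added or changed since the last run.
    subprocess.run(["snapraid", "sync"], check=True)
    # Re-read a slice of the array and check it against parity.
    subprocess.run(["snapraid", "scrub"], check=True)

if __name__ == "__main__":
    nightly_maintenance()
```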
See, IBM (with OS/2) and Microsoft (with Windows 2.x and 3.x) were cooperating initially.
Right-ish, but I’d say there was actually a simpler problem than the one you laid out.
The immediate and obvious thing that killed OS/2 wasn’t the compatibility layer; it was IBM not having drivers for any hardware that wasn’t sold by IBM, while Windows had (relatively) broad support for everything anyone was likely to actually have.
Worse, IBM pushed to kill features that IBM’s own hardware didn’t support, so you ended up with a Windows that supported your hardware and the features you wanted, and ran on cheaper hardware, fighting it out with an OS/2 that did none of that.
IBM essentially decided to, well, be IBM and committed suicide in the market, and didn’t really address a lot of the stupid crap until Warp 3, at which point it was years too late and didn’t matter; Windows 95 came swooping in shortly thereafter, and that was the end of any real competition on the desktop OS scene for quite a while.
That could probably work.
Were it me, I’d build a script that re-hashes all the data and compares it against the previous hashes as the first step of adding more files; if everything comes out consistent, copy the new files over, hash everything again, save the hash results somewhere else, and repeat as needed. Something like the sketch below.
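A minimal sketch of that workflow; the paths and manifest location are assumptions, it assumes an initial manifest already exists, and it keeps the archive flat for simplicity:

```python
import hashlib, json, shutil
from pathlib import Path

ARCHIVE = Path("/mnt/archive")                   # the archival drive (assumed path)
MANIFEST = Path("/srv/manifests/archive.json")   # stored OFF the drive on purpose

def hash_file(p: Path) -> str:
    h = hashlib.sha256()                         # md5 also works; sha256 is cheap enough
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify() -> bool:
    """Step 1: existing data must still match the last saved manifest."""
    old = json.loads(MANIFEST.read_text())
    return all(hash_file(ARCHIVE / name) == digest for name, digest in old.items())

def add_files(new_files: list[Path]) -> None:
    if not verify():
        raise RuntimeError("existing data no longer matches the manifest; stop here")
    for src in new_files:
        shutil.copy2(src, ARCHIVE / src.name)
    # Re-hash everything and save the results somewhere that is NOT the drive.
    manifest = {p.name: hash_file(p) for p in sorted(ARCHIVE.iterdir()) if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))
```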
The “format” is the physical medium itself: the tape in the drive, or the disk, or whatever.
Tape existed 50 years ago: nothing modern and in production can read those tapes.
The problem is, given a big enough time window, the literal drives to read it will simply no longer exist, and you won’t be able to access even non-rotted media because of that.
As for data integrity, there are a lot of options: you can make an md5 sum of each file, then do it again later and see if anything is different.
The only caveat is that you have to make sure the checksums get stored somewhere that’s not JUST on the drive, because if the drive DOES corrupt itself and your only record of the “good” hashes is on the drive, well, you can’t necessarily trust those hashes either.
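The comparison step itself is only a few lines. A sketch using md5 specifically, with the old manifest loaded from wherever you keep it off the drive (all names here are illustrative):

```python
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root: Path, old_manifest: dict[str, str]) -> list[str]:
    """Relative paths whose current md5 differs from the recorded one."""
    return [
        str(p.relative_to(root))
        for p in sorted(root.rglob("*"))
        if p.is_file() and md5_of(p) != old_manifest.get(str(p.relative_to(root)))
    ]
```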
So, 50 years isn’t a reasonable goal unless you have a pretty big budget for this. Essentially no media is likely to survive that long and be readable unless they’re stored in a vault, under perfect climate controlled conditions. And even if the media is fine, finding an ancient drive to read a format that no longer exists is not a guaranteed proposition.
You frankly should be expecting to have to replace everything every couple of years, and maybe more often if your routine tests of the media show it’s started rotting.
Long-term archival storage really isn’t just dumping data to some media, locking it up, and never looking at it ever again.
Alternately, you could just make someone else pay for all of this, and shove all of this to something like Glacier and make the media Amazon’s problem. (Assuming Amazon is around that long and that nothing catches fire.)
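For reference, the upload side of that is about this much code with boto3; the vault name and file path are made up, AWS credentials are assumed to be configured already, and note this is the older vault-style Glacier API:

```python
import boto3

glacier = boto3.client("glacier")
with open("/mnt/archive/photos-2024.tar", "rb") as f:    # assumed archive file
    resp = glacier.upload_archive(
        vaultName="family-archive",                      # assumed vault name
        archiveDescription="photos-2024",
        body=f,
    )
# Keep the archive id somewhere safe; you need it to ever retrieve the data.
print("archive id:", resp["archiveId"])
```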
Ah HP printer drivers, my favorite form of self-inflicted malware.
My favorite HP sucks story happened many a year ago. The boss’s shitty HP multi-function POS died, and we got him a nice Brother instead, and then went to uninstall the drivers.
Somehow, and the reason for this is totally unknown to anyone other than HP engineers, the driver ‘uninstaller’ decided that today’s hilarity would be that it was going to uninstall… everything.
After about 15 minutes of the drive churning away, I got concerned, rebooted it, and found that nearly 75% of everything on it had been deleted by the uninstaller.
No fucking idea, but that was a fun thing to explain and then fix.