

I do that for data I want to persist, but which I don’t care about backing up (eg caches)
I can outsource things like ddos protection to my cdn provider, but that would still be just kinda hoping I didn’t have any attackable surface I didn’t think of prelaunch.
In that case, I wonder if your money would be better spent on contracting a security review. If you’re worried about unknown attack surface, I’m not sure that funding organized crime to rent a botnet would help. Botnet operators rely on you to tell them what to attack, so you’re unlikely to discover anything new here. Better to hire a professional and get a fresh opinion.
Is this something you’re self hosting for fun, or is it some kind of business?
If you’re running web services for a business, you should look into existing load test tooling/infrastructure. Some of it is fully managed; other options involve a degree of setup (eg spinning up worker nodes in AWS or whatever). The hard part is designing your load test to match IRL traffic patterns, but once you have that down you can confidently answer questions about service scalability.
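To make the load-test idea concrete, here’s a minimal sketch (function names and numbers are mine, not from any particular tool): fire a fixed number of requests at a URL with bounded concurrency and report latency percentiles. Real tools like k6 or Locust add ramp-up, traffic shaping, and proper reporting on top of this basic shape.

```python
# Minimal load-test sketch: N requests at a fixed concurrency level,
# then report latency percentiles. Illustrative only.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

def hit(url: str) -> float:
    """Time a single GET request; return latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test(url: str, total: int = 100, concurrency: int = 10) -> dict:
    """Issue `total` requests with at most `concurrency` in flight."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(hit, [url] * total))
    cuts = quantiles(latencies, n=100)  # 99 percentile cut points
    return {"requests": total, "p50": cuts[49], "p95": cuts[94]}
```

The key knob is `concurrency`: matching it to your real-world peak connection count is what makes the numbers meaningful.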
A load test is not a DDoS test. Load tests tell you how much legitimate traffic your services can take. DDoS consists of illegitimate traffic which may not correspond to what your web services expect.
Usually you don’t test your systems for something like a DDoS. You would instead set up DDoS protection through a CDN (content delivery network) to shield yourself and let someone else handle the logistics of blocking unwanted load. It’s a really hard problem to solve.
Depending on what you want to learn, running your own DDoS is unlikely to be very instructive. Most “DDoS as a service” networks are not going to tell their customers how anything works, they just take your bitcoin and send some traffic where you tell them.
I realize that this is a humor post, and not necessarily the right place to provide advice, but never underestimate the power of adding a Q&A meeting to someone else’s calendar. Someone doesn’t want to make time to explain mystery tool? Well you just made it for them. Usually I try and be polite by asking before I arrange something.
Probably more war:
If I don’t have to worry about nuclear retaliation, maybe I’m very confident in engaging in war. After all, my nukes will still work, and everyone else’s won’t.
Imagine the nuclear armed countries who are enemies of another nation with a bigger military. North Korea vs USA, Pakistan vs India. In these cases, nuclear weapons are a deterrent against the stronger opponent. Without that, the country with the stronger conventional force may be more likely to think it can win a war unscathed.
Whether or not you’re wasting your time in college is something only you can answer. However, there definitely are jobs out there for junior software devs right now. If economic outlooks improve, I’d expect demand for juniors to rise also.
Anecdote: I saw stats shared on social media by a CS professor at my former college. Enrollment for their classes is way down this year, when “back in my day” they were packed. Make of it what you will, but it’s possible young people might no longer be seeing software development as an easy career to get into. That could make it a more attractive prospect for someone who’s in it for more than just money.
I’m not going to say “Don’t learn Gentoo next” but if you’re already well versed in Nix or setting up a base Arch install, I feel like the only thing Gentoo will teach you is “How long does it really take to compile Firefox from source?”
At my last job, there was no planning of work/projects. Like, there was a general plan of “We need feature X by Q3 and here’s what it should do”, but nothing about breaking work down into smaller units or prioritizing different tasks.
The manager would drop an email: “Hey, can you do …” and that was it. Now it’s another thing to throw down the waterfall. Big surprise, the same bastard would harp about how the project was underperforming!
There was nothing RESTful or well planned about this API’s interfaces, and the work to do something like that would have been nontrivial. Management never prioritized the work.
At a prior job, our API load balancers would swallow all errors and return an HTTP 200 response with no content. It was because we had one or two clients with shitty integrations that couldn’t handle anything but 200. Of course, they brought in enough money that we couldn’t ever force them to fix it on their end.
Are you able to independently confirm that the domaincheck container is listening on the right port? Eg netstat -tunlp on the host.
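If you’d rather check from a script, here’s a minimal sketch (the host and port are placeholders for whatever your container publishes): just try to open a TCP connection.

```python
# Quick reachability check: can we open a TCP connection to the port
# the container is supposed to publish? Complements netstat, which
# only shows what's listening, not whether it's reachable.
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note this only proves something accepted the connection; it won’t tell you whether it’s the right container behind the port.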
Are you able to block it from your user settings page? There’s a tab for adding communities/users to your blocklist.
I use it whenever I want to block a community, but I don’t want to visit their page.
There are definitely UI inconsistencies across devices, especially smart TVs. Jellyfin on Firestick looks different from Jellyfin on Roku, which looks different from Jellyfin on WebOS. Some devices deliver Jellyfin through a thin browser client, and in those cases you get a unified design. Outside of that, it’s a crapshoot as to what the app will let you do. Of course, it’s a volunteer project (and all my thanks to any maniac willing to develop TV apps), so I don’t expect that everything can be easily and neatly unified.
I can’t deny that it’s sometimes hard to support my users because of this. Someone complains that they’re getting movies dubbed in an unwanted language: I can’t guarantee that the button to select audio track will look the same on their end when I talk them through it.
Ah, I see what you mean. Yeah, no way around that without a GPU or a processor with integrated graphics.
You should be able to get a used workstation GPU for $20-40 on eBay. Something from Dell or a basic Nvidia Quadro would do the trick. If you could sell the 1660 Super for more than that, it could be worth the effort.
Alternatively, the 1660 Super would do the trick nicely if you ever needed to transcode video streams, like from running Jellyfin or Plex.
However, I was never able to have the server completely headless.
Depending on what you mean by “completely headless” it may or may not be possible.
Simplest solution: When you’re installing the OS and setting up the system, you have a GPU and monitor for local access. Once you’ve configured SSH access, you no longer need the GPU or monitor. You could get by with a cheap “just display something” graphics card and keep it permanently installed, only plugging in the monitor when something is not working right. This is what I used to do.
Downside: If you ever need to perform an OS reinstall, debug boot issues, or change BIOS settings, you will need to reconnect the monitor.
Medium tech solution: Install a cheap graphics card, and then connect your server with something like PiKVM or BliKVM. They can plug into your GPU and motherboard and provide a web interface to control your server physically. Everything from controlling physical power buttons to emulating a USB storage device is possible. You’ll be able to boot from cold start, install OS, and change BIOS settings without ever needing a physical monitor. This is what I do now.
Downsides: Additional cost to buy the KVM hardware, plus now you have to remember to keep your KVM software updated. Anyone who controls the KVM has equivalent physical access to the server, so keep it secure and off the public internet.
Perforce
We manage branches by taking an existing path on the Perforce server, copying its contents to a differently named directory, and registering that new path server-side.
So on paper, I can tell my local client to map my files to that new remote path, and then trigger a sync. In my experience, the sync treats my branch jumping as pulling completely new files. It touches everything in my work directory. As far as our makefiles are concerned, this means everything has to rebuild.
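For anyone wondering why a full re-sync means a full rebuild: make and similar tools decide staleness by comparing timestamps, not file contents, so a sync that rewrites every file bumps every mtime and makes every object look out of date even when the bytes are identical. A minimal sketch of the check make performs (function name is mine):

```python
# make-style staleness: a target is rebuilt if any of its sources has
# a newer modification time, regardless of whether the contents changed.
import os

def is_stale(target: str, source: str) -> bool:
    """Rebuild `target` if `source` has been modified more recently."""
    return os.path.getmtime(source) > os.path.getmtime(target)
```

Content-hash-based build systems avoid this, but with mtime-based makefiles the only fix is to stop touching files that haven’t actually changed.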
Reminds me of http://www.thecodelesscode.com/case/21
What, and miss out on all the overtime pay from fixing everything at the last minute?
Actually, C++. An enormous codebase plus we build all dependencies from source. I asked my dev lead why we don’t have access to pre-compiled dependencies and he answered with a mix of embarrassment and “that’s just how it’s done”.
A 4h build would be OK if I only needed to do it once. However, our source control system lacks even a basic conception of branches, so each new ticket requires destroying and regenerating your workspace.
SRE: