Amazing stuff. Thank you so much!
It is easier to think of the SSL termination in legs: one leg from the visitor to Cloudflare, and another leg from Cloudflare back to your origin.
If, however, you want to directly expose your service without the orange cloud (running a game server on the same subdomain, for example), then you’d disable the orange cloud and use Let’s Encrypt or deploy your own certificate on your reverse proxy.
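If you go the Let’s Encrypt route, a minimal sketch assuming an nginx reverse proxy (the subdomain is a placeholder):

```
# Grey-cloud (DNS only) the record in Cloudflare first, then:
sudo certbot --nginx -d game.example.com
# certbot edits the nginx config and sets up automatic renewal
```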
Looking great! I think it would be amazing if there were filters for processor generation as well as form factor. Thanks for sharing this tool!
Another possibility: the console vendors are catching a whiff of the whole gatekeeper mess, and they don’t want to risk being forced to open up their physical cartridge DRM mechanisms to allow third-party cartridges for the relatively small EU market (compared to the rest of the world). Moving toward digital is much easier as a result.
It is also clear as day that they’re testing the waters with Nintendo players, who are generally deemed more casual and less likely to push back than the savvier user bases of the other major consoles. Once this blows over, they will just move to digital everything across the board, citing successes and cost savings on the other platforms as the basis for the move.
In the old days, it used to be a problem because everyone just connected their Windows 98 desktop, with all their services directly exposed, straight to the internet over dial-up, without the concept of a gateway preventing the internet from accessing internal resources. Nowadays, you’re most likely behind an ISP router that doesn’t forward ports by default, and you’re only exposing the things you actually want to expose.
For things you’d actually want to expose, having the service on the default port is fine, and reduces the chance that other systems fail to interact with it because they expect it on the default port. Moving services to a different port is just security through obscurity, and honestly doesn’t add much value. You can port scan the entire public IPv4 space fairly quickly and fairly cheaply. In fact, it’s most likely already been mapped:
https://www.shodan.io/host/<your-ip-here>
Keeping the service up to date and applying best practices around it is much more important and beneficial. For SSH, make sure you’re using key-based authentication and have password-based authentication disabled; add fail2ban to automatically ban those trying to brute force. For Minecraft, enable online mode and keep it whitelist-only unless you’re running a public server for everyone.
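A rough sketch of the SSH side (paths and values are common defaults; adjust for your distro):

```
# /etc/ssh/sshd_config: key-based auth only
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no

# /etc/fail2ban/jail.local: ban repeated failed SSH logins
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
```

Reload sshd after editing, and make sure your key actually works before you close your existing session.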
I’m not saying you’re wrong — I’ve even upvoted your earlier comments because I’m generally in agreement; you’re an instance admin judging by your handle, go and check the vote history yourself lol.
I’m saying people shouldn’t force their janky, unproven solo solution onto someone else who doesn’t share their level of distrust and would just rather trust the multibillion-dollar multinational corporation, when all they want is something that’s been working fine for them.
There’s always the “add more of everything so something can fail without impacting stability” aspect, and that’s great for a corporation needing the redundancy; but it’s probably prudent not to forget there’s also the “I’m interested in learning” aspect, where people run a home server to play with the software side of things.
You’re spot on in that we’d need to know what it is that OP would like to do with the system, but I’m getting the feeling that stability isn’t that high of a concern just yet.
Until the basement floods and the server goes offline for a few days; or a botched upgrade fails quietly; or an overzealous SpamAssassin configuration; etc. etc.
It sounded like they were trying to archive things from Gmail to their own server, so just cut the middleman jank out, and let the wife continue to use her Gmail as intended.
Or better yet, let her keep her Gmail. Don’t force any lab instability onto others… especially email. One lost important email (even if it’s not your fault) and you’ll never hear the end of it.
The answer depends on how you’re serving your content. Based on what you’ve described about your setup, your content is likely served over HTTP through the secured tunnel. The tunnel acts like an encrypted VPN, which allows unencrypted content to be sent securely over the wire. This means although your web server is serving unencrypted content, it gets encrypted before it goes to Cloudflare, so no one along the path could snoop on it.
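For example, a typical cloudflared config for that setup might look something like this (tunnel ID and hostname are placeholders):

```
# /etc/cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080  # plain HTTP origin; the tunnel encrypts it in transit
  - service: http_status:404       # catch-all rule cloudflared requires
```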
I don’t care for the argument one way or another; I’m not an EU resident and the whole thing is irrelevant to me as an individual.
I’m merely pointing out that neither the Fediverse/Lemmy/etc. nor Reddit as a platform cares about the EU’s privacy concerns, and people should be well informed when entering either platform, so they’re not doing so with a false sense of security that they’d be able to exercise those government-granted rights effectively.
Good luck with that. Once the post federates out, the host instance can request deletion, but any federated instance that received the content doesn’t necessarily have to honor that request. They could easily modify their instance to not delete, they may reactivate the content from the moderation log, they might have backup strategies that involve retaining data (for their own local legal reasons), and so on.
It’s probably best to assume any content you post on Lemmy is out of your control and will live for much longer than you’d expect.
This is not limited to just Lemmy but applies to any federated system. So whether it’s a centralized corporation behind the service or an open federated system, one way or another, whatever you post out there is no longer yours to control.
No PRs means no automated tests/CI/CD, which means you’d slow down the release train. It might typically be just a quick two-minute cycle, but that one time it runs long because of a botched update from upstream means you’re never going to do that again during business hours.
Must be a very unique sector. Good luck with your explorations!
I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.
The biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, since they couldn’t do it during business hours while the engineers were working. With a team of at least 20, scheduling downtime gets increasingly difficult.
It also adds an entire system to be audited by the auditors.
The self-host vs. buy-commercial decision kind of bounces back and forth. For smaller teams, fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you get to megacorp scale with a dedicated ops department maintaining your entire infrastructure, it’s probably more effective to just pay for a cloud solution from a major vendor.
ddclient paired with a supported provider.
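A minimal ddclient.conf sketch, assuming Cloudflare as the provider (zone, hostname, and token are placeholders; exact options vary by ddclient version, so check the docs for your provider):

```
# /etc/ddclient.conf
daemon=300                      # re-check the public IP every 5 minutes
use=web                         # discover the public IP via a web service
protocol=cloudflare
zone=example.com
login=your@email.example
password=<api-key-or-token>
home.example.com
```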
You really should have separate services for registration, DNS and hosting. That way you’re not held hostage by a single provider.
Are you by chance using something like Cloudflare? It may be possible that during the reboot your IP changed, so your “gateway” can no longer reach your router on the old IP?
In other words: it’s always the DNS?
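One quick way to check (hostname is a placeholder): compare what DNS says with the public IP you’re actually behind right now:

```
dig +short home.example.com   # the IP your DNS record points at
curl -s https://ifconfig.me   # the public IP you're actually behind
```

If the two don’t match, that’s your answer.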
Using Ollama to try a couple of models right now for an idea. I’ve tried running Llama 3.2 and Qwen 2.5 3B, both of which fit in my 3050’s 6 GB of VRAM. For fun, I’ve also tried Qwen 2.5 32B, which fits in my RAM (I’ve got 128 GB), but it could only reply at a couple of tokens per second, making it very much a non-interactive experience. I’ll need to explore the response-time piece a bit further to see whether there are ways I can still lean on larger models with longer delays.
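In case anyone’s poking at the same thing, here’s a small sketch against Ollama’s local REST API to measure tokens per second (model name and prompt are just examples; assumes the default port 11434):

```python
import requests

# Ask the local Ollama instance for a non-streamed completion,
# then derive tokens/sec from the eval stats in the response.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:3b", "prompt": "Say hello.", "stream": False},
    timeout=600,
).json()

# eval_count is the number of generated tokens; eval_duration is in nanoseconds
tokens_per_sec = resp["eval_count"] / (resp["eval_duration"] / 1e9)
print(f"{resp['model']}: {tokens_per_sec:.1f} tokens/sec")
```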