I feel like most of these people were way over-analyzing the questions. No reason to look for in-depth meaning in the possible answers, just answer them and take them at face value.
Honest question - is GitLab really that different from GitHub when it comes to vendor lock-in?
If she woke up to a vibration from a watch, I bet she’d wake up hearing motorized blinds.
It kinda does feel that way, doesn’t it?
Funny that predictive text seems to be more advanced in this instance, but I suppose this is one of those scenarios where you want to make sure you get it right.
What’s the problem with just not using the portion of the service you do not wish to use? For almost everyone, the integration with email for the calendar is what actually makes it function, where you will be interacting with other people. Most people who want to create a new, unique calendar will just create an additional one in an existing account if they want a separate calendar for a certain purpose.
That’s what I do with my wife for events that we both need to know about. So we have a calendar that is just our stuff and we both subscribe to it (or more like she has the calendar shared with her from my account) but she has permissions to add/remove things. Is there some reason you need a completely separate calendar on a unique service? I feel like we are missing something about your use case to actually be able to understand what you are trying to do.
I would also second Hugo, which I use for my personal site and blog (which I haven’t updated in a long time). The nice thing is that it has a minimal footprint in terms of needing to watch out for updates, unlike something like WordPress, which was known for being vulnerable if left unmaintained. With Hugo it’s mostly a matter of looking out for old themes with vulnerable JavaScript.
Another popular option is Jekyll, and I honestly can’t remember why I picked Hugo over it. But if you don’t need dynamic content, why make things more complex?
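For anyone curious how lightweight Hugo is: the entire site configuration can be a handful of lines. A made-up example (all values are placeholders, not my actual config; older Hugo versions name this file `config.toml` instead of `hugo.toml`):

```toml
# hugo.toml — minimal example config; every value here is a placeholder
baseURL      = "https://example.com/"
languageCode = "en-us"
title        = "My Blog"
theme        = "some-theme"
```

Everything else lives in markdown files and the theme, which is most of why there’s so little to maintain.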
I use apt-cacher-ng. Most of my use case, though, is caching packages for Docker image builds, as I build up to 200+ images daily. In reality, I have aggressive image caching so I don’t actually build anywhere close to that many each day, but the stats are impressive: 8.1 GB of data fetched from the internet versus 108 GB served from the acng instance, according to the recent history on its stats page.
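The wiring is just an apt proxy setting pointed at the cache. Roughly like this in a Dockerfile (the hostname is a placeholder; 3142 is apt-cacher-ng’s default port):

```dockerfile
# Sketch only — "acng.lan" is a placeholder for your apt-cacher-ng host.
FROM debian:bookworm
RUN echo 'Acquire::http::Proxy "http://acng.lan:3142";' \
      > /etc/apt/apt.conf.d/01proxy \
 && apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -f /etc/apt/apt.conf.d/01proxy
```

Removing the proxy file at the end keeps the proxy setting from being baked into the final image.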
I have two internet connections - one is fiber and the other is cable. My cable is the backup connection and is a lower-tier offering with a 1.2 TB/month cap, while my primary fiber is 1 Gbps symmetrical with no data cap. I use pfSense to handle failover in case of an outage.
I also use acme.sh. It has worked great for me and was dead simple to use. Super flexible in what it can do, from just renewing certs to web server integration. Love the simple-to-use hooks available too.
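As a rough sketch of a typical issue-and-install flow (the domain and paths are placeholders; check the acme.sh docs for your web server and validation mode):

```shell
# Placeholder domain/paths; webroot mode assumes the site is already being served.
acme.sh --issue -d example.com -w /var/www/example.com
acme.sh --install-cert -d example.com \
  --key-file       /etc/nginx/ssl/example.com.key \
  --fullchain-file /etc/nginx/ssl/example.com.pem \
  --reloadcmd      "systemctl reload nginx"
```

The `--reloadcmd` is one of those hooks - it runs automatically after every renewal, so the web server always picks up the fresh cert.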
Check out Plexamp, the Plex music streaming client.
This is why I take a multivitamin. I deal with low iron and it helps a bit. Gotta be careful though if you do have an iron deficiency, since many multivitamins don’t have iron. I could just take iron supplements, but my doctor agreed that it was a good idea to just go with a multivitamin.
I use Homer. Really simple, basic config, and it looks nice. The stats are pretty cool for certain integrations and are easy to add - I’ve added a few myself for services that didn’t have them. The only issue is slow PR review.
Human for scale.
I’m in a similar boat, except I just run everything in standard Docker containers, but I do use Telegraf, InfluxDB, and Grafana for everything. I’ve gone mostly to Discord notifications for any alerts. If I run into a problem scenario, I figure out how to monitor it, add it via Telegraf, and add an alert. I’m still just using Grafana alerts, but that works fine for my home lab.
Even better if I can automate fixes for those problems. One of the best things I did was monitoring all of my network devices and all major hops. If I have internet or network issues, I know exactly where the problem is without having to troubleshoot. Lots of dpinger and shell scripts to feed data into Telegraf.
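The scripts are nothing fancy. A sketch of the idea, emitting InfluxDB line protocol that Telegraf’s exec input can consume (the IP and measurement name are placeholders, not my real setup):

```shell
#!/bin/sh
# Ping one hop and print InfluxDB line protocol for Telegraf's exec input.
# "192.0.2.1" and "hop_latency" are placeholders.
HOST="${1:-192.0.2.1}"
# Field 5 of the summary line (split on '/') is the average RTT on both
# Linux ("rtt min/avg/...") and BSD/macOS ("round-trip min/avg/...") ping.
rtt=$(ping -c 3 -q "$HOST" 2>/dev/null | awk -F'/' '/^rtt|^round-trip/ {print $5}')
if [ -n "$rtt" ]; then
    echo "hop_latency,target=$HOST rtt_ms=$rtt,up=1"
else
    echo "hop_latency,target=$HOST up=0"
fi
```

Run one of these per hop on a Telegraf interval and you get a latency/availability series for every device on the path, which is what makes pinpointing an outage trivial.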
We didn’t have many CDs even after CD players had become pretty popular, so we had Columbia House. We would typically get the 3 CDs that you could buy for a special price from a catalog and nothing more. It was helpful in building a decent collection pretty quickly, though. Almost all of my CDs as a kid came from them.
You can do TCP proxying with nginx, but many of the features HAProxy offers for free are behind a paywall in nginx. In nginx, layer 4 connections are handled through streams, and you can do both TCP and UDP. I stick with HAProxy for TCP streams with very few exceptions. HAProxy is definitely more robust for situations where you have a pool of upstream servers. For a single upstream instance, nginx isn’t terrible, but most of the features I would use for finer control over failover and balancing aren’t available in open source nginx.
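For reference, the nginx side is a `stream` block rather than the usual `http` block. A minimal sketch (addresses, ports, and the pool name are made up):

```nginx
# Minimal layer 4 TCP proxy sketch — upstream addresses are placeholders.
stream {
    upstream db_pool {
        server 10.0.0.11:5432;
        server 10.0.0.12:5432 backup;   # only used if the primary is down
    }
    server {
        listen 5432;
        proxy_pass db_pool;
    }
}
```

UDP works the same way with `listen 5432 udp;`. It’s the finer-grained health checking and balancing controls on top of this that open source nginx lacks compared to HAProxy.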
but…but…but…he said he would fix it! Sure he didn’t say how but he would, right? Right?!?
Trusting a con man…