• 0 Posts
  • 32 Comments
Joined 2 years ago
Cake day: September 7th, 2023


  • I wasn’t looking for technical support. You can do everything correctly and still have your mail randomly marked as spam or not delivered at all. This has happened to us, to some of our customers, to multiple smaller email providers, and to several municipalities (imagine blackholing government emails, what a grand idea). They don’t send sensible return headers, they might not bounce your undelivered mail at all, they won’t react to any inquiries to their postmaster contact (or anywhere else, really), and they sometimes blacklist entire IP blocks. The only way to sidestep issues with them is to pay a few thousand bucks to join their cool kids club, the Certified Senders Alliance, which is what the big marketing firms use to push mass amounts of unwanted ads through their networks unhindered.



  • I’ve had the opposite experience with their cloud services in a professional context. My biggest gripe is with United Internet, the monopolistic company that owns IONOS, 1&1 (an ISP) as well as the ad-ridden, flaming piles of garbage that are GMX and WEB.DE, two of the most popular email providers in Germany and a constant source of pain for anyone operating an email server. They ignore common industry standards and best practices, silently block your mail server for absolutely no reason, don’t respond to inquiries, and just generally make the internet a slightly worse place for small and medium-sized businesses and self-hosters.





  • Imagine a tool that gives you a language in which you can describe the hardware resources you want from a cloud provider. Say you want multiple different classes of servers, each with its own set of firewall rules. Something like Terraform lets you put that into a text-based form, make changes to it, re-run the tool, and expect resources to be created, changed, and destroyed to match what you wrote down.
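    As a rough sketch of what that text-based form looks like, here is a minimal Terraform configuration assuming the Hetzner Cloud provider; the resource names, server count, and values are invented for illustration, not taken from the comment above:

```hcl
# Hypothetical example: one firewall, three identical servers attached to it.
resource "hcloud_firewall" "web" {
  name = "web-firewall"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "443"
    source_ips = ["0.0.0.0/0", "::/0"]
  }
}

resource "hcloud_server" "web" {
  count        = 3
  name         = "web-${count.index}"
  server_type  = "cx22"
  image        = "debian-12"
  firewall_ids = [hcloud_firewall.web.id]
}
```

    Editing this file (say, changing `count = 3` to `count = 5`) and re-running `terraform apply` is what reconciles the real cloud resources with the description.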



  • I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI-compatible images just fine, and they are very widely used in production. Having said that, plenty of people run Docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.


  • XML aims to be both human-readable and machine-readable, but manages neither. It’s only really worth it if you actually need the complexity or extensibility; otherwise it’s just a major pain to map XML structures to any sensible type representation. I’ve been forced to work with some of the protocols that people like to present as examples of good XML usage, and I hate every single one of them.
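    A small stdlib-only Python sketch of that mapping pain (the payloads are invented for illustration): the same record that `json.loads` turns into typed values in one call has to be traversed and cast by hand when it arrives as XML, because everything in an XML tree is a string.

```python
import json
import xml.etree.ElementTree as ET

xml_doc = """
<user id="42">
  <name>alice</name>
  <active>true</active>
</user>
"""
json_doc = '{"id": 42, "name": "alice", "active": true}'

# JSON maps onto native types directly.
from_json = json.loads(json_doc)

# XML needs manual traversal, and every field arrives as text,
# so each value must be located and converted individually.
root = ET.fromstring(xml_doc)
from_xml = {
    "id": int(root.get("id")),
    "name": root.findtext("name").strip(),
    "active": root.findtext("active").strip() == "true",
}

assert from_json == from_xml
```

    Multiply this by namespaces, attributes-versus-elements ambiguity, and mixed content, and the "sensible type representation" complaint writes itself.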

    Fuck YAML though. Its spec is longer and more complex than that of any other markup language I know of, and it doesn’t have a single fully compliant implementation.





  • With Blu-ray rips, I don’t really see any way to avoid that, unfortunately, unless someone else has already added the hashes for your release. Most people use it to scan their encoded releases, which will (in most cases) have already been added to AniDB by the release group. I’m a bit surprised, though, that none of your rips are recognized. Have you checked the AniDB pages for your series to see if anyone has uploaded hashes for Blu-ray rips?



  • Shoko compares a file’s ED2K hash against the AniDB database; the filename doesn’t matter for automatic detection. Have a look at the log to see if there are any issues. It’s entirely possible that AniDB just doesn’t have the hashes for the raw Blu-ray rip. In that case you can either manually link the files in Shoko, connecting the AniDB episode ID to the file hash, or create new file entries on AniDB with your specific hashes.
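    A minimal Python sketch of that hash-then-look-up shape. Real ED2K hashes a file in 9,728,000-byte chunks with MD4 (and, for multi-chunk files, hashes the concatenation of the per-chunk digests); MD4 is often missing from modern `hashlib` builds (OpenSSL moved it to the legacy provider), so this sketch takes the digest as a parameter and defaults to MD5 purely as a stand-in. The manual-link table and episode ID are invented for illustration.

```python
import hashlib

CHUNK_SIZE = 9_728_000  # ED2K hashes files in 9,728,000-byte chunks


def ed2k_style_hash(data: bytes, digest=hashlib.md5) -> str:
    """Chunked hash in the ED2K shape.

    Real ED2K uses MD4; MD5 is a stand-in here because MD4 is
    frequently unavailable in modern hashlib builds.
    """
    chunks = [data[i:i + CHUNK_SIZE]
              for i in range(0, len(data), CHUNK_SIZE)] or [b""]
    if len(chunks) == 1:
        return digest(chunks[0]).hexdigest()
    # Multi-chunk files: hash the concatenation of the per-chunk digests.
    return digest(b"".join(digest(c).digest() for c in chunks)).hexdigest()


# Hypothetical manual-link table: file hash -> AniDB episode ID,
# which is what manually linking in Shoko effectively creates.
local_links = {ed2k_style_hash(b"raw bluray rip bytes"): 123456}


def identify(data: bytes):
    """Return the linked episode ID, or None when the lookup fails."""
    return local_links.get(ed2k_style_hash(data))
```

    When `identify` returns `None`, that corresponds to the "AniDB doesn’t have this hash" case: either link the file manually or add a new file entry with your hash.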