

Are all the *arr services aware that they are expected to have a certain basepath?
LOL, ok, fair 😁
You should in any case consider your backup strategy. If you have reliable backups, your fuckups can’t be as bad anymore. If you don’t have reliable backups, “raw” storage doesn’t help you either. Maybe even the contrary: you won’t notice if individual files get corrupted or even lost until it’s too late. (Not talking about disk corruption, against which the right filesystem can guard you… but I am not sure you trust filesystems either 😛)
Why does the storage layer of seafile scare you? Are you also scared of databases and prefer storing things in raw txt files? The difference is the same. You get certain features in return:
You still have access via:
I don’t like the syntax or the runtime environment (which runs interpreted), and with PHP more than with most other languages (aside from JS), a lot of the code out there is hacked together horribly, which makes me completely distrust the community.
Personally I stay away from anything that doesn’t have a compiler.
I was in the same boat, and therefore my nextcloud instance was mostly running for backwards compatibility with a few setups I have, while I mostly use seafile, immich and sogo. But a few days ago I updated to nextcloud hub 10 (I think that’s with nextcloud 31 under the hood) and damn does that run smooth. I was so impressed I got motivated to finally set up the high-performance backend for nc talk.
I still dislike PHP, but nextcloud just won back my heart a little.
On mobile I indeed also had that issue once. However, I made sure they can’t lock me out completely. The db is stored using the open-source SQLCipher, so one can open it and extract everything manually if absolutely necessary. As long as they don’t change this, I am fine. In the worst case that would still be a lot of effort for me, but not impossible.
The export has also improved a lot. You can now also export to JSON which includes all the data one could need.
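Just to illustrate what that manual extraction looks like in principle, here is a rough sketch using the sqlcipher command-line shell. The file name, table name and key/KDF settings are placeholders: depending on the app, the key may be a derived raw key rather than the plain master password, so they would have to match whatever the app actually uses internally.

```sql
-- Opened with:  sqlcipher vault.db
-- ("vault.db" stands in for the app's actual database file)

-- Placeholder key/KDF settings; they must match the app's internals,
-- otherwise the database stays unreadable.
PRAGMA key = 'your-master-password';
PRAGMA kdf_iter = 100000;

-- If the key was accepted, everything below is plain SQLite.
.tables
SELECT * FROM items LIMIT 5;   -- "items" is a made-up table name
```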
If you don’t have a hard requirement of it being fully (!) open source, then I would recommend Enpass. Relatively pleasing UI that runs natively on Win, Mac, Linux, Android and iOS. It has browser plugins for Chrome and Firefox that talk directly to the running fat client (so no separate authentication per browser necessary).
The password db is completely local, but it offers several sync mechanisms like WebDAV, Dropbox or iCloud; basically whatever can store files. If it’s a NAS in your home, it will simply sync once you are back home.
It also offers “WiFi Sync”, in which case you designate one machine running Enpass as the server and link other clients to it; then you don’t even need to host anything separate for it (but that machine needs to be on and running Enpass when you want to sync, obviously).
It’s basically a less open but much more convenient and beautiful KeePass(XC).
> Nvidia rightfully earned their bad reputation on linux,
Really? IMO not with GPUs. They have released linux drivers for decades, and always in time for new kernel versions. ATI was typically way behind and buggy as hell. I would likely not have switched to Linux on the desktop in 2006 if it wasn’t for my GPU “just working”, without any fiddling. Performance was always on par with Windows, and stuff like multi-monitor setups just worked. They even had their nice setup utility to configure Xorg for you.
Could they have handled the transition to Wayland better? Maybe. But claiming they earned a bad reputation in regards to GPUs when they are the one big vendor that had extremely active linux support for ages is dishonest and unwarranted, IMO.
I think CryptPad has delete-after-view.
Edit: yes, it has
True.
Although in Germany, for example, it can also be an issue when recording. If you have a security camera pointed at a public space (that can include the sidewalk in front of your house), passersby can sue you to take it down and potentially get you fined. Even pretending to constantly record such an area can yield that result.
So, buzzer WRONG.
Quite arrogant after you just constructed a faulty comparison.
> If I say my name is Doo doo head, in a public park, and someone happens to overhear it - they can do with that information whatever they want. Same thing.
That’s absolutely not the same thing. Overhearing something that is in the background is fundamentally different from actively recording everything going on in a public space. You film yourself or some performance in a park and someone happens to be in the background? No problem. You build a system to identify everyone in the park and collect recordings of their conversations? Absolutely a problem, depending on the jurisdiction. Many jurisdictions factor in the intent of the recording and the reasonable expectations of the people being recorded, and being in public doesn’t automatically entail consent to being recorded.
See for example https://www.freedomforum.org/recording-in-public/
(And just to clarify: I am not arguing against your explanation of Twitch’s TOS, only against the bad comparison you brought.)
No, I keep that private to minimize the information I leak about what I host, sorry. (I also don’t do git-ops for my server; I back the mentioned directories up via kopia so in case of recovery I just restore the last working state of data+config. I don’t have much need to version the configs.)
What I did to get rid of my mess, was to containerize service after service using podman. I mount all volumes in a unified location and define all containers as quadlets (systemd services). My backup therefore consists of the base directory where all my container volumes live in subdirectories and the directory with the systemd units for the quadlets.
That way I was able to slowly unify my setup without risking breaking everything at once. Plus, I can easily replicate it on any server that has podman.
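As a rough sketch of what one such quadlet looks like (the service, image and paths are made up for the example): one .container file per service in the systemd unit directory, with all volumes pointing into the shared base directory, so backing up those two locations captures the whole thing.

```ini
# ~/.config/containers/systemd/freshrss.container (rootless; /etc/containers/systemd/ for rootful)
# podman's systemd generator turns this into a freshrss.service unit.
[Unit]
Description=FreshRSS container

[Container]
Image=docker.io/freshrss/freshrss:latest
PublishPort=127.0.0.1:8080:80
# Every volume lives under one base directory, which is what gets backed up.
Volume=/srv/containers/freshrss/data:/var/www/FreshRSS/data:Z
Volume=/srv/containers/freshrss/extensions:/var/www/FreshRSS/extensions:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload the container behaves like any other systemd service, and the backup only needs the volume base directory plus the quadlet directory.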
No, since at the moment it wants to manage certificates itself, and I don’t intend to run pangolin as my main reverse proxy.
Pangolin is the most user-friendly self-hosted alternative to Cloudflare Tunnels. There are dozens of alternatives, but none with that feature set and such a UI.
Yes. You can simply not expose SMTP at all and just use the IMAP/JMAP part. Unless you also need JMAP, I am not sure it brings a lot to the table that you wouldn’t also get from good old Dovecot. IMO the big advantage of Stalwart is the all-in-one package it delivers, plus the good defaults. It also shines when you want a multi-node deployment. For a single-node, IMAP-only setup it might not be the best choice, in my opinion. But it would work, if you want to.
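For what it’s worth, this is roughly what an “IMAP/HTTP only” listener section could look like in Stalwart’s TOML config. The key names follow its server.listener.* scheme but should be double-checked against the current docs, and the ports/addresses are just examples.

```toml
# IMAPS for mail clients
[server.listener."imaps"]
bind = ["[::]:993"]
protocol = "imap"
tls.implicit = true

# HTTPS for JMAP and the web admin, if wanted
[server.listener."https"]
bind = ["[::]:443"]
protocol = "http"
tls.implicit = true

# No listener with protocol = "smtp" or "lmtp" is defined,
# so nothing SMTP-related is exposed at all.
```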
We can ask, but the indicators are there:
It aims at both, otherwise it wouldn’t ship with SQLite and RocksDB backends. Stalwart’s default is clearly geared towards single-node setups, and expanding it to clustering takes further steps. So while it supports large-scale deployments, it is clearly not meant to be limited to them.
It’s a 0.x release. It makes sense to build the intended features first before optimizing heavily. There’s no point in having an optimized data structure that then falls flat once you need to add new features that bring new requirements to it.
Once they label it 1.x (i.e. feature complete and production ready) I would expect it to be optimized. If it isn’t, criticism is warranted.
Does it make a difference if that setting uses a trailing slash? Might be that it redirects you to the path without it, which triggers Caddy to redirect you again, and so on and so forth.
You could also, instead of redirecting, rewrite it. Then it is handled server-side without sending the client somewhere else.
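A minimal Caddyfile sketch of the rewrite approach (the /app path and the upstream address are just examples):

```caddyfile
example.com {
    # "redir /app /app/ 308" would bounce the client via a Location header
    # instead; a rewrite fixes the path internally before proxying.
    rewrite /app /app/
    reverse_proxy /app/* 127.0.0.1:8080
}
```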