Do you put the version in each commit? That seems painful
Does Actual support investment accounts / stocks? I was using beancount/fava for tracking, but have been lazy and haven’t updated it in a long time.
It’s already been recommended, but I think Grist or a low-code/no-code tool like baserow or nocodb might work for you.
Also, I’d love to see what you come up with! My cats are picky eaters and I’ve been wanting to keep track of which wet foods they do and don’t like.
I use the Nexus free version. You can cache docker registries and other repos like apt/yum/pypi/etc.
It works pretty well, but could be overkill compared to some of the other options.
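For the docker side, the usual setup is a docker (proxy) repository in Nexus exposed on its own HTTP connector port, with the Docker daemon pointed at it as a pull-through mirror. Very rough sketch (hostname and port are made up):

```
# point Docker at a hypothetical Nexus docker-proxy repo
# (merge with any existing daemon.json instead of overwriting)
sudo tee /etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://nexus.example.com:8082"] }
EOF
sudo systemctl restart docker
```

Worth noting that registry-mirrors only covers Docker Hub pulls; images from other registries have to be pulled through the Nexus hostname explicitly.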
If the operator doesn’t allow it for some reason, uninstall it and try the helm chart instead?
Or is there a reason to use the operator?
Why not?
You can use docker exec with the garage docker image.
I’m on mobile but I think you just need something like: docker exec <containerid> /garage stats
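Expanding on that, roughly (assuming the official garage image, where the binary lives at /garage):

```
docker ps                                  # find the container id
docker exec <containerid> /garage stats    # usage/storage statistics
docker exec <containerid> /garage status   # node and layout overview
```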
Garage is the simplest of the three imo.
I’ve only used it in a cluster, but it should be even simpler for a single instance.
A subscription is required??
What mobile issues do you have? I use it both on desktop and mobile with sync mode turned on in the PWA.
I like it; it seems pretty stable to me. I didn’t use it much before the query/template stuff was changed. I think both are fine right now, but I don’t really know what it looked like before.
There’s also “space-script” now, which is basically mini JavaScript plugins you can write inside your notes. It’s what drew me away from trilium in the end.
I don’t blame you for taking a break if you ran into breaking changes though. That’s one benefit to keeping your notes in regular markdown files too.
Any comparisons to SilverBullet.md? It’s my favorite so far
Do you use garage for backups by any chance? I was wanting to deploy it in Kubernetes, but one of my uses would be to back up volumes, and… that doesn’t really help me if the Kubernetes cluster itself is broken somehow and I have to rebuild it.
I kind of want to avoid a separate cluster for storage, or even separate VMs. I’m still thinking of deploying garage in k8s, and then just using rclone or something to copy the contents from garage’s S3 buckets to my NAS.
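Something like this is what I have in mind; the remote name, bucket, and NAS path are all placeholders:

```
# 'garage' = an rclone remote of type s3 pointed at garage's S3 endpoint
rclone sync garage:backups /mnt/nas/garage-backups --progress
```

That way the backups survive even if the cluster itself has to be rebuilt.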
I really like mine too; I have both a tube and a pro. Both of them have a weird issue with the TV I use most often, though: neither will display anything unless I boot it in safe mode.
They both work on a different TV that’s 4K; this one is an older 1080p plasma. It’s weird, because it used to work just fine. It might be related to the TV, but no other devices have issues, and it’s cheaper to replace one of the shields than to buy a new TV lol.
I’m still using an Nvidia Shield, which I guess counts as an Android box. I thought they’d have released a new version by now, but I’m considering building an HTPC instead.
I used to use a Raspberry Pi 2 or 3 and it worked fine for 1080p content. Not sure if the newer Pis support 4K, but it’s on my list to look into eventually.
This is an option, my main reason for not wanting to use a hosted k8s service is cost. I already have the hardware, so I’d rather use it first if possible.
Though I have been thinking of converting some sites to be statically generated and hosted externally.
Network Policies are a good idea, thanks.
I was more worried about escaping the container, but maybe I shouldn’t be. I’m using Talos as the OS now, and there isn’t much on it as it is. I can probably also enforce that all of my public services run as non-root users, and disallow privileged containers/etc. (rough sketch below).
Thanks for recommending crowdsec/falco too. I’ll look into those.
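On the non-root/unprivileged point, the rough sketch I had in mind: Kubernetes’ built-in Pod Security Admission can enforce that per namespace (the namespace name here is made up):

```
# reject privileged/root pods in the namespace holding public services
kubectl label namespace public-services \
  pod-security.kubernetes.io/enforce=restricted
```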
It’s mostly working fine for me.
An alternative I tried before was whitelisting which IPs are allowed to access specific ingresses, while having the ingress listen on both public/private networks. I like having a separate ingress controller better, because I know the ingress isn’t accessible at all from a public IP. It keeps the logs separated as well.
Another alternative would be an external load balancer or reverse proxy that can access your cluster. It’d act as the “public” ingress, but would need to be configured to allow specific hostnames/services through.
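Concretely, the separate-controller setup looks roughly like this with the ingress-nginx helm chart (release name, namespace, and IP are invented):

```
# second controller with its own ingress class, pinned to an internal LB IP
helm install ingress-private ingress-nginx/ingress-nginx \
  --namespace ingress-private --create-namespace \
  --set controller.ingressClassResource.name=private \
  --set controller.service.loadBalancerIP=10.0.0.50
```

Internal-only Ingress resources then just set ingressClassName: private, and the public controller never even sees them.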
I did actually consider a 3rd cluster for infra stuff like dns/monitoring/etc, but at the moment I have those things in separate vms so that they don’t depend on me not breaking kubernetes.
Do you have your actual public services running in the public cluster, or only the load balancer/ingress for those public resources?
Also how are you liking garage so far? I was looking at it (instead of minio) to set up backups for a few things.
I like the approach of CI pipelines just running a make command, or at least a script, so that it’s easy to run locally too before pushing the changes up.
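As a sketch of the pattern (target names are arbitrary):

```
# locally, before pushing:
make lint test

# in CI, the job body is the exact same one-liner, e.g.
#   script:
#     - make lint test
```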