Burn1ngBull3t@lemmy.world to Selfhosted@lemmy.world • What's up, selfhosters? - The Sunday thread • English · 1 · 6 months ago
Good suggestion, actually. I'll head back to the MetalLB docs. Thanks!
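(For reference, the usual MetalLB layer-2 setup I'd be revisiting looks roughly like this; the pool name and address range are placeholders for whatever fits your LAN:)

kubectl apply -f - <<'EOF'
# Hypothetical pool of LAN IPs MetalLB can hand out to LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.200-192.168.0.220
---
# Announce those IPs on the local network via ARP (layer-2 mode)
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
EOF

A LoadBalancer Service then gets an IP from that pool, which also works for raw TCP/UDP, unlike an HTTP-only tunnel.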
Burn1ngBull3t@lemmy.world to Selfhosted@lemmy.world • What's up, selfhosters? - The Sunday thread • English · 3 · 6 months ago
Many issues this week:
- Broke external-dns on my kube cluster because I updated my Pihole to v6
- Thinking of a way to expose a game server externally (I usually used CF tunnels for specific services, but couldn't get that to work here because it's TCP/UDP and not HTTP traffic)

But at least I got my Velero backups working on a private S3.
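For reference, pointing Velero at a private, S3-compatible endpoint looks roughly like this (the bucket name, endpoint URL, region, credentials file, and plugin version below are placeholders/examples):

# Install Velero with the AWS plugin against a self-hosted S3 endpoint
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.8.0 \
  --bucket velero-backups \
  --secret-file ./s3-credentials \
  --backup-location-config region=main,s3ForcePathStyle="true",s3Url=https://s3.internal.lan:9000

The s3ForcePathStyle option matters for most self-hosted S3 servers (e.g. MinIO), which serve buckets by path rather than by subdomain.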
Burn1ngBull3t@lemmy.world to Selfhosted@lemmy.world • How do you handle SSL certs and internet access in your setup? • English · 4 · 8 months ago
Either Tailscale or Cloudflare Tunnels is the most suitable solution, as other comments said.
For Tailscale, since you already set it up, just make sure you have an exit node where your services are. I had to do a bit of tinkering to make sure the IPs were resolved: it's just an argument to the tailscale command.
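Something like this, roughly (the subnet is a placeholder for your LAN range):

# On the machine next to your services: offer an exit node and advertise the LAN subnet
sudo tailscale up --advertise-exit-node --advertise-routes=192.168.0.0/24

# On your partner's device: accept the advertised routes
sudo tailscale up --accept-routes

You still have to approve the exit node and the routes in the Tailscale admin console afterwards.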
But if you don't want to use Tailscale because it's too complicated for your partner, then Cloudflare Tunnels are the other way to go.
How it works is by creating a tunnel between your services and Cloudflare, kind of how a VPN would work. You usually use the cloudflared CLI, or go directly through Cloudflare's website, to configure the tunnel. You should have a DNS domain imported into Cloudflare, by the way, because you have to create a binding such as service.mydns.com -> myservice.local so Cloudflare can resolve your local service and expose it on a public URL.
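The CLI flow is roughly this (the tunnel name "home" and the hostnames/port are placeholders):

# Authenticate and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create home

# Bind a public hostname (on the domain you imported into Cloudflare) to the tunnel
cloudflared tunnel route dns home service.mydns.com

# Run the tunnel, forwarding traffic to the local service
cloudflared tunnel run --url http://myservice.local:8080 home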
Just so you know, Cloudflare Tunnels are free for that kind of usage; however, Cloudflare holds the keys for your SSL traffic, so in theory they could have a look at your requests.
Best of luck with the setup!
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • [K8S] nfs and mounting problems • English · 5 · 2 years ago
Hello @theit8514
You are actually spot on ^^
I did look in my exports file, which was like so:
/mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)
I added a localhost line just in case:
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)
It didn’t solve the problem. I went to investigate with the mount command:
- Will mount on 192.168.0.65:
mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will NOT mount on 192.168.0.55 (the NAS):
mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will mount on 192.168.0.55 (the NAS):
mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test
The mount -t nfs 192.168.0.55:... form is the one the cluster actually uses. So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.

EDIT:
It was actually WAY simpler.
I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
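So the working exports file presumably ends up something like this, reloaded with exportfs:

# /etc/exports — the NAS's own LAN IP now has access too
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.55(rw) 192.168.0.65(rw)

# Re-export without restarting the NFS server
sudo exportfs -ra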
Thanks a lot for your help @theit8514@lemmy.world!
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • Longhorn overkill for RAID? • English · 1 · 2 years ago
Exactly, thanks!
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • Longhorn overkill for RAID? • English · 2 · 2 years ago
Haha sorry indeed, it's Kubernetes related and not Windows WeDontSayItsName related 😄
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • Longhorn overkill for RAID? • English · 1 · 2 years ago
You are completely right.
However, in my mind (might be wrong here), if I use another node, I wouldn't use the RAID array completely.
While setting it up, I thought it's either:
- a NAS storageClass attached to the RAID array, with no Longhorn, or
- Longhorn with no RAID, but replication set to 3 (see the sketch below)
In either case, the availability of my data would be quite the same, right?
(Then there are options to back up my PVs to S3 with Longhorn and all that, which I would have to set up again though.)
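A sketch of that second option, assuming a standard Longhorn install (the StorageClass name is a placeholder):

kubectl apply -f - <<'EOF'
# Ask Longhorn to keep 3 replicas of every volume created from this class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-3-replicas
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
EOF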
Thanks for your answer!
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • Longhorn overkill for RAID? • English · 1 · 2 years ago
I would guess it doesn't like replicas set to 1, indeed.
And using a NAS would indeed be a single point of failure, but the way I'm using Longhorn right now already is one (if my storage node goes down, my cluster becomes unstable).
Thanks!
Burn1ngBull3t@lemmy.world (OP) to Selfhosted@lemmy.world • Longhorn overkill for RAID? • English · 1 · 2 years ago
Hello! Thanks for your response!
Yes, RAID is what provides availability for my data here; with or without Longhorn, there wouldn't be much difference (especially since I only use one specific node).
And you would be right: since the other nodes are unschedulable, storage will be available only on my "storage node", so if that one goes down, my storage goes down.
That's why Longhorn might be overkill for me, but there are functions to back up and restore to S3, for example, that I would need to set up, I guess.
Burn1ngBull3t@lemmy.world to Selfhosted@lemmy.world • Questions on backing up to S3 Glacier Deep Archive. • English · 1 · 2 years ago
Hello! Just adding my two cents for Scaleway. I've used them personally for some services (and will probably add S3 storage in the near future).
It seems pretty reliable, in my opinion.
I think you are right, indeed; I had the idea to maybe use the GC for AI stuff and play with it. I would probably go with kube and add the NAS to Longhorn (which I already set up).
Would have been cool to add yet another machine to the cluster, especially if I could use the NAS for the kube VolumeClaims. 🤔
Also true
Yeah, I was wondering how you actually use versioning with that drag and drop. Homepage seems better for that, IMO.