Greetings! So, I finally got my wife’s permission to buy a Pi Zero 2 and a Beelink S12 Pro (N100), arriving tomorrow. I already have a NAS for my media.

Question is: what’s the typical setup for this, and which guides should I follow?

Of course I will be scouring this and other communities for info, but the immediate items I want to get working are my Plex/Jellyfin server, RetroArch (or equivalent) for gaming, and of course the *arr servers. I would also like to get into a reverse proxy, SearXNG, Nextcloud, and Pi-hole.

Any tips on how to make this beautiful?

OS recommendations? I currently run Manjaro on my daily driver, but I’d think Kubuntu or a KDE Fedora/Debian spin might be a better fit for these items.

Guides you can point me to? Suggestions for more or better options? There are plenty of answers in this community and I will look at what’s posted, but any assistance is appreciated.

Thank you in advance.

I’m excited to start playing with the simple things.

    • Dran@lemmy.world · 2 hours ago

      Sure. I have an R630 configured as an NFS server and a Docker host called vacuum. A script called install_vacuum.sh can, with a single command, build the server to my spec from a base install of Ubuntu 24.04. It has functions to install base packages from repositories, add new repositories, set up users, and create config files for NFS, SMB, fstab, crontab, etc. Once an NFS server exists on my network, any other server could be my Docker host.

      The Docker host is set up from a script called install_containers.sh. As before, it does everything needed to get me a basic Docker host, firewalled and configured for persistence via my NFS server. It also has functions to create and start Docker containers for all of my workflows (Plex, webserver, CA, etc.), and if a container doesn’t exist, it builds a Docker image for that workflow from a standardized format: (you guessed it) a bash build script for the container.

      Cron automation on whichever host runs Docker rebuilds and updates the containers once a week; the bare-metal servers update themselves nightly, rebooting when necessary via unattended-upgrades.
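
      To make that concrete, here is a rough sketch of the shape such a script takes. Every name, path, and port in it is an illustrative placeholder, not my actual config:

          #!/usr/bin/env bash
          # install_containers.sh-style sketch; all names/paths are placeholders.
          set -euo pipefail

          # Every script sources one shared config so variables live in one place.
          source /opt/homelab/common.conf   # defines NFS_SERVER, PERSIST_ROOT, ...

          install_base_packages() {
              apt-get update
              apt-get install -y docker.io nfs-common ufw unattended-upgrades
          }

          configure_firewall() {
              ufw default deny incoming
              ufw allow ssh
              ufw allow 32400/tcp   # Plex
              ufw --force enable
          }

          configure_persistence() {
              # Container state lives on the NFS server, so any host can take over.
              mkdir -p "${PERSIST_ROOT}"
              grep -q "${NFS_SERVER}" /etc/fstab ||
                  echo "${NFS_SERVER}:/srv/persist ${PERSIST_ROOT} nfs defaults,_netdev 0 0" >> /etc/fstab
              mount -a
          }

          start_plex() {
              # Build the image from its own bash build script if it doesn't exist yet.
              docker image inspect local/plex >/dev/null 2>&1 || /opt/homelab/builds/build_plex.sh
              docker rm -f plex 2>/dev/null || true
              docker run -d --name plex --restart unless-stopped \
                  -p 32400:32400 \
                  -v "${PERSIST_ROOT}/plex:/config" \
                  local/plex
          }

          schedule_weekly_refresh() {
              # Cron entry that re-runs this script to rebuild/update containers weekly.
              echo '0 4 * * 1 root /opt/homelab/install_containers.sh' > /etc/cron.d/container-refresh
          }

          install_base_packages
          configure_firewall
          configure_persistence
          start_plex
          schedule_weekly_refresh

      Each workflow gets its own build_<name>.sh in the same standardized style, so adding a service is one more start function plus one more build script.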

      Basically, you break everything down into the simplest possible functions, define everything via variables in shared configuration files that every script sources before running, and have higher- and higher-level functions call the ones below them until a single function cascades into a functioning system. Does that make sense?
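
      A toy version of that cascade (function bodies stubbed out, names invented):

          #!/usr/bin/env bash
          set -euo pipefail
          source /opt/homelab/common.conf   # shared variables everything reads

          # Lowest level: each function does exactly one simple thing.
          add_repos()        { :; }   # add package repositories
          install_packages() { :; }   # install base packages from them
          create_users()     { :; }   # set up service accounts
          write_configs()    { :; }   # NFS/SMB/fstab/crontab config files

          # Mid level: functions composed of the simple ones.
          prepare_host()   { add_repos; install_packages; }
          configure_host() { create_users; write_configs; }

          # Top level: the single function that cascades into a working system.
          build_server() { prepare_host; configure_host; }

          build_server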