What’s your reason for using HTTP? That seems like a really bad idea in this day and age, ESPECIALLY if that’s something you’re going to make available on the internet.
A reverse proxy is basically a landing place that acts as a middleman between the client and the server. Most people set it up so that all traffic on 80 or 443 goes to the reverse proxy, and the reverse proxy then serves the correct website based on the Host header of the request.
If you are currently serving multiple websites on your server, then that means you are serving each website on a different port.
So, just make sure that the reverse proxy is listening on a port that is not used by your other sites. It will only respond on its own port, and it will only serve the site(s) that you have configured in the proxy.
You’ll be fine!
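If it helps, here’s a minimal sketch of what that host-header routing looks like. I’m assuming nginx here; the hostnames, ports, and backends are just placeholders:

```nginx
# Two sites behind one reverse proxy port; nginx picks the server
# block whose server_name matches the request's Host header.
server {
    listen 8080;                     # a port not used by your other sites
    server_name site-a.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # backend for site A
        proxy_set_header Host $host;        # pass the original Host header along
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 8080;
    server_name site-b.example.com;

    location / {
        proxy_pass http://127.0.0.1:3001;   # backend for site B
    }
}
```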
I hear you… it’s definitely not about one tasting better than others, but maybe more about the eating experience. I do think there’s a legitimate argument about how different pasta shapes encourage different pasta to sauce ratios, but at the end of the day it’s just the two elements coming together and the taste is what it is. We should all enjoy it the way we want to! I just wanted to explain why some people talk about certain sauces and certain pasta shapes “belonging” together.
It has everything to do with the consistency of the sauce and how well it sticks to the pasta. For example, spaghetti with a meat sauce isn’t a great choice because the meat won’t actually stick to the pasta and you’ll have to scoop up that meat “manually.” Better is pappardelle, which has a huge surface area that causes the meat to stick to the pasta.
ahhhh yes, that makes perfect sense… thank you for pointing that out! Especially since I’m not good enough with `vi` to know how to bulk delete the first character on specific lines, so I had to manually arrow and delete.
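For the record, it looks like the bulk version would be an ex command such as `:10,20s/^.//` in vi/vim, or the equivalent sed one-liner from the shell (the line range and filename here are just examples):

```sh
# delete the first character on lines 10-20, editing the file in place
sed -i '10,20s/^.//' somefile.conf
```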
I successfully migrated postgres 15 to 16. I followed the general idea of the guide you posted, but I found it a little easier to do a slightly different process. Here’s what I did:

1. `docker-compose down` for the lemmy instance
2. Edit the `docker-compose.yml` file and comment out all of the services except postgres. In addition, add a new volume to the postgres service that looks something like this: `- ./volumes/miscfiles:/miscfiles` (see the sketch after this list)
3. `docker-compose up -d postgres` (this starts just the `postgres` service from the docker compose file)
4. `docker exec -it [container name] pg_dumpall -U [username] -f /miscfiles/pgdumpall20240628` (I think this will work, but it’s not exactly what I did… rather, I ran `docker exec -it [container name] bash`, and then ran `pg_dumpall -U [username] -f /miscfiles/pgdumpall20240628`. The end result is a dumpall file saved in the `./volumes/miscfiles` directory on the host machine)
5. `docker-compose down`
6. `mv ./volumes/postgres ./volumes/postgresBAK20240628` (move your existing postgres data to a new directory for backup purposes)
7. `mkdir ./volumes/postgres` (re-create an empty postgres data folder; make sure the owner and permissions match the `postgresBAK20240628` directory)
8. Edit `docker-compose.yml` and update the postgres image tag to the new version
9. `docker-compose up -d postgres` (you’ll now have a brand new postgres container running with the new version)
10. `docker exec -it [container name] psql -U [username] -f /miscfiles/pgdumpall20240628` (again, I think this will work, but I `bash`ed in and ran the command from within the container. This also allows you to watch the file execute all of the commands… I don’t know if it will do that if you run it from the host.)
11. `docker-compose down`
12. Edit the `docker-compose.yml` and un-comment all of the other services that you commented out in step 2
13. `docker-compose up -d`
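Not my actual file, but a rough sketch of what the postgres service might look like during steps 2 and 8 (the service names, image tags, credentials, and paths are assumptions based on a typical lemmy `docker-compose.yml`):

```yaml
services:
  postgres:
    image: postgres:16-alpine          # step 8: bump this tag (e.g. from 15-alpine)
    environment:
      - POSTGRES_USER=lemmy            # placeholder credentials
      - POSTGRES_PASSWORD=changeme
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
      - ./volumes/miscfiles:/miscfiles # step 2: scratch space for the dump file
  # lemmy:                             # steps 2/12: every other service stays
  #   image: dessalines/lemmy          # commented out during the migration
  # lemmy-ui:
  #   image: dessalines/lemmy-ui
```

And for step 7, GNU coreutils can copy the owner and mode straight from the backup directory (assuming a Linux host):

```sh
mkdir ./volumes/postgres
chown --reference=./volumes/postgresBAK20240628 ./volumes/postgres
chmod --reference=./volumes/postgresBAK20240628 ./volumes/postgres
```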
Hopefully that helps anyone that might need it!
edited to reflect the comment below
Thank you, that’s super helpful!
Depending on what exactly you’re looking for, PhotoStructure might be a good option. It’s got a great UI for viewing photos, and it’s meant to play well with other metadata software.
I have `server2` (which replaced `server1`). I also have ‘nvr1’.
> telegraf is so easy to use and extend
Definitely… you can write custom scripts that Telegraf will run and write that data to Influx. For instance, I have one that writes the Gateway status information from pfSense so I can track and graph any internet downtime.
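If anyone’s curious, the wiring for that is Telegraf’s exec input; something like this (the script path, interval, and InfluxDB details are placeholders, not my actual config):

```toml
# telegraf.conf — run a custom script on a schedule and forward its
# output to InfluxDB
[[inputs.exec]]
  commands = ["/opt/scripts/gateway_status.sh"]  # hypothetical script that
  interval = "60s"                               # prints InfluxDB line protocol
  timeout = "10s"
  data_format = "influx"

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
```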
CPU/RAM/Disk/Network etc. get written to InfluxDB via Telegraf, and visualized with Grafana.
Logging and errors go to a Graylog stack (MongoDB, OpenSearch, Graylog).
Unraid
> because it’s much, much faster and easier to consume content via video
That totally depends on the content. Using your example, yes, a video of an explosion is going to be much more efficient than a block of text about the same explosion. But for something like this, I find it MUCH slower to try to glean the relevant information from a video than from an article. An article can be skimmed easily, so I only have to focus on the parts that I care about. Skimming a video, on the other hand, is a pain. Also, if the content is a step-by-step how-to, the video might be OK as long as I can follow along in real time. However, if I have to keep pausing and going back to rewatch a section, then an article is going to be easier to work with.
The baffle in the T320 is marked to indicate the second CPU as well as the second bank of RAM slots, so I think it’s safe to say it’s identical.
Any SATA or SAS drive should work just fine. If you need some hot swap caddies, you can buy them fairly cheap on Amazon: https://www.amazon.com/s?k=dell+t320+drive+caddy
I’ve personally used the WorkDone brand caddies, and they are perfectly usable!
edit: I’m fairly certain it doesn’t support four 3.5″ AND eight 2.5″ drives… the form factor supported depends on the backplane that’s installed. Also note that the backplane physically supports double that number of hard drives… you’d just need an HBA card with two internal ports. See this list for some options: https://forums.serverbuilds.net/t/official-recommended-sas2-hba-internal-external/4581
Is there not a native Nextcloud app for your tablet?
That doesn’t make any sense… the fact that it’s only used in part of the world makes it even more useful for the bot to define it.
Is there a reason why your bot doesn’t define CSAM?
I use a Graylog/Opensearch/Mongodb stack to log everything. I spent a good amount of time writing parsers for each source, but the benefit is that everything is normalized to make searching easier. I’m happy with it as a solution!
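In Graylog those parsers mostly end up as pipeline rules; just as an illustrative sketch (the source match and grok pattern here are made up, not one of my actual rules):

```
rule "normalize nginx access logs"
when
  to_string($message.source) == "nginx"   // hypothetical source name
then
  // extract fields with a grok pattern and merge them into the message
  let parsed = grok(pattern: "%{COMBINEDAPACHELOG}", value: to_string($message.message));
  set_fields(parsed);
  set_field("app", "nginx");              // normalized field shared across sources
end
```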
Gotcha… as long as you understand that any device that receives that traffic can see exactly what’s in it! (no sarcasm intended at all… if you’re informed of the risk and OK with it, then all is well!)