It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.
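For reference, cutting out the middleman and running llama.cpp's bundled server directly looks something like this; the model path is a placeholder and exact flags vary by build, so check `llama-server --help` for yours:

```
# Serve a local GGUF model with llama.cpp's built-in OpenAI-compatible server.
# -ngl 99 offloads as many layers as possible to the GPU; drop it for CPU-only.
llama-server -m ./models/your-model.gguf --port 8080 -ngl 99
```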
OP, do you have any telemetry you can show us comparing the performance difference between what you set up in this guide and an Ollama setup? Otherwise, at face value, I’m going to assume this is another piece of uncorroborated bullshit on the internet. Apologies for sounding rude.
I don’t like some things about the devs. I won’t rant, but I especially don’t like the hint they’re cooking up something commercial.
This concerns me. Please provide links here so we can read about this. I would like any excuse to uninstall Ollama. Thank you!
The defense systems being sent there are not only highly advanced but also extremely expensive to the taxpayer, and the training to operate them is compartmentalized under high levels of secrecy, even from allies such as Israel.
It would be irresponsible for the military to send them over there without the proper personnel, with the right clearances and training, to operate them.
If you don’t believe me, try joining the military, attempt to become a Patriot missile battery operator, and let us know how many times your background check fails.
This almost looks like a screenshot from a modern video game.
In the app: Profile → Data & Privacy → Personalized Shopping
I’m self-hosting Vaultwarden and my home server got killed by the hurricane, yet I can still access my passwords just fine in the app, because it keeps a locally encrypted copy on my phone from the last time it synced. I just can’t update or change anything until I bring everything back online.
So host your own shit, you cowards, it’ll be fine.
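If anyone wants to try it, standing up Vaultwarden is roughly one Docker command. This is a sketch based on the project's quick-start; the host path and published port are examples, and the Vaultwarden wiki has the current recommended flags:

```
# Run Vaultwarden; password data persists in /vw-data on the host.
docker run -d --name vaultwarden \
  -v /vw-data/:/data/ \
  -p 8080:80 \
  --restart unless-stopped \
  vaultwarden/server:latest
```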
Yeah, either secure copy (scp) or rsync is the way to do this securely.
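Assuming you have SSH access to the target box, it's one line either way (user, host, and paths below are placeholders):

```
# Copy a single file over SSH with scp.
scp ./backup.tar.gz user@server:/srv/backups/

# Or sync a whole directory with rsync: -a preserves permissions and timestamps,
# -z compresses in transit, and repeated runs only transfer what changed.
rsync -az --progress ./mydir/ user@server:/srv/backups/mydir/
```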
Why not use this and select whatever LLM you want for RAG? It literally allows you to self-host the model and select any model for both chat and RAG analysis. I have it set to Hermes 3 8B for chat and a 1.3B Llama 3 as the RAG model.
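If the backend is Ollama, pulling the models is one command each. The tags below are my best guess at the closest matches in the Ollama library; browse ollama.com/library to confirm, since I'm not sure which "1.3B Llama 3" is meant:

```
# Pull a chat model and a small model for the RAG/task side.
ollama pull hermes3:8b
ollama pull llama3.2:1b   # closest small Llama-family tag I know of; the 1.3B above may be something else
```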
You don’t even need a GPU; I can run Ollama + Open WebUI on my CPU with an 8B model fast af.
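Open WebUI is one container if you already have Ollama on the host. Something close to the project's documented quick-start, with the port mapping and volume name adjustable:

```
# Open WebUI talking to an Ollama instance on the host; data persists in a named volume.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```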
Some animals, such as certain deep-water crustaceans (mantis shrimp) and cephalopods (cuttlefish), can see more colors than most mammals, and their brains are often smaller.
Okay, thank you for the elaboration. I am very dumb and impatient. We appreciate you.
There are plenty of step-by-step guides and videos for most things, especially popular tools like this.
And yet you provided zero direction on where to look.
The Servarr wiki has setup instructions for all of the core *arr suite apps as well, both full install guides and quick-start guides: https://wiki.servarr.com/readarr
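If you’d rather run it in Docker, the LinuxServer.io image is a common route. A sketch, with placeholder paths, PUID/PGID, and timezone (note Readarr’s image has historically shipped under a develop tag):

```
# Readarr via the LinuxServer.io image; the web UI defaults to port 8787.
docker run -d --name=readarr \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -p 8787:8787 \
  -v /path/to/config:/config \
  -v /path/to/books:/books \
  -v /path/to/downloads:/downloads \
  lscr.io/linuxserver/readarr:develop
```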
I read through the site and it gets to a part where it assumes I know how to set up a reverse proxy on a server. Definitely not friendly for tech-illiterate people such as myself. So these are dogshit instructions.
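For what it’s worth, the reverse-proxy step can be close to a one-liner if you use Caddy instead of nginx; the hostname and port here are examples, and Caddy handles TLS certificates automatically for a real domain:

```
# Forward requests for media.example.com to a local Readarr instance on port 8787.
caddy reverse-proxy --from media.example.com --to localhost:8787
```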
qBittorrent (a torrent client) is also easy to install on Windows or Linux: https://www.qbittorrent.org/ . You’re also welcome to pick another one; I just like qBittorrent.
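On Debian/Ubuntu-family distros it should be in the standard repos, so no manual download needed:

```
# Install qBittorrent from the distro repositories (Debian/Ubuntu).
sudo apt install qbittorrent
# Or the headless variant for a server, managed through its web UI:
sudo apt install qbittorrent-nox
```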
Cool. Now where the hell do I find the books? Your instructions also suck for tech-illiterate people.
Apologies for sounding rude, but you guys all preach this shit and there’s nowhere that teaches dumb morons like me to do this without already knowing high-level networking protocols and manual VPN configuration. And it’s really frustrating.
Too bad there’s no easy way for a tech-illiterate dumb person such as myself to read step-by-fucking-step instructions to get it all working.
Yes, most Kindles allow you to load your own PDF and ebook files, so pirating them is inconsequential.
I only use the most expensive of microwaves that rotate the actual causal reality of the earth-itself around the microwave.
Why not simply host your own LLM and leverage that beefy gaming GPU you very likely own to do the work?
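For example, with Ollama: the install script URL below is the one documented for Linux, the model tag is just an example, and Ollama picks up a supported GPU automatically:

```
# Install Ollama and chat with a local 8B model; layers offload to the GPU if one is detected.
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.1:8b
```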