Speaking as a Floridian and general Southerner… could not disagree more.
Speaking as a Floridian and general Southerner… could not disagree more.
Of course, there are advantages to statehood as well. Having your citizens fully eligible for all forms of federal assistance, when your population’s average wealth and income is well below national averages? That has some advantages.
Especially since Puerto Rico is right in hurricane alley. This is going to become increasinly relevant, fast.
Also, it seems like such a huge issue to force unless there’s a giant majority one way or another, which would only happen if there was some MASSIVE benefit or detriment to becoming a state (which is totally possible). If they marginally decide to go one way, and the next years the population realizes it was a huge mistake due to changes, they’re screwed.
Just as an example, the U.S. could actually go mad, and Peurto Rico would want nothing to do with them. If they were already a state… that would suck. Alternatively, maybe we hit some climate tipping point sooner than expected, and hurricanes become such an existential threat that they need federal help to deal with them. If they became independant, well, that’s not an option anymore. Both very possible scenarios.
To actually answer this, you could look into free APIs of open source models, which have daily limits but are otherwise largely catch-free. You could even mirror endpoints on your VPS if you need to, or host “middleware” like prompt formatters and enhancers.
I say this because, as others said, you cannot actually host AI on a VPS…
To go into more detail:
Exllama is faster than llama.cpp with all other things being equal.
exllama’s quantized KV cache implementation is also far superior, and nearly lossless at Q4 while llama.cpp is nearly unusable at Q4 (and needs to be turned up to Q5_1/Q4_0 or Q8_0/Q4_1 for good quality)
With ollama specifically, you get locked out of a lot of knobs like this enhanced llama.cpp KV cache quantization, more advanced quantization (like iMatrix IQ quantizations or the ARM/AVX optimized Q4_0_4_4/Q4_0_8_8 quantizations), advanced sampling like DRY, batched inference and such.
It’s not evidence or options… it’s missing features, thats my big issue with ollama. I simply get far worse, and far slower, LLM responses out of ollama than tabbyAPI/EXUI on the same hardware, and there’s no way around it.
Also, I’ve been frustrated with implementation bugs in llama.cpp specifically, like how llama 3.1 (for instance) was bugged past 8K at launch because it doesn’t properly support its rope scaling. Ollama inherits all these quirks.
I don’t want to go into the issues I have with the ollama devs behavior though, as that’s way more subjective.
It’s less optimal.
On a 3090, I simply can’t run Command-R or Qwen 2.5 34B well at 64K-80K context with ollama. Its slow even at lower context, the lack of DRY sampling and some other things majorly hit quality.
Ollama is meant to be turnkey, and thats fine, but LLMs are extremely resource intense. Sometimes the manual setup/configuration is worth it to squeeze out every ounce of extra performance and quantization quality.
Even on CPU-only setups, you are missing out on (for instance) the CPU-optimized quantizations llama.cpp offers now, or the more advanced sampling kobold.cpp offers, or more fine grained tuning of flash attention configs, or batched inference, just to start.
And as I hinted at, I don’t like some other aspects of ollama, like how they “leech” off llama.cpp and kinda hide the association without contributing upstream, some hype and controversies in the past, and hints that they may be cooking up something commercial.
Nah, I should have mentioned it but exui is it’s own “server” like TabbyAPI.
Just run exui on the host that would normally serve tabby, and access the web ui through a browser.
If you need an API server, TabbyAPI fills that role.
Shrug did you grab an older Qwen GGUF? The series goes pretty far back, and its possible you grabbed one that doesn’t support GQA or something like that.
Doesn’t really matter though, as long as it works!
Your post is suggesting that the same models with the same parameters generate different result when run on different backends
Yes… sort of. Different backends support different quantization schemes, for both the weights and the KV cache (the context). There are all sorts of tradeoffs.
There are even more exotic weight quantization schemes (ALQM, VPTQ) that are much more VRAM efficient than llama.cpp or exllama, but I skipped mentioning them (unless somedone asked) because they’re so clunky to setup.
Different backends also support different samplers. exllama and kobold.cpp tend to be at the cutting edge of this, with things like DRY for better long-form generation or grammar.
So there are multiple ways to split models across GPUs, (layer splitting, which uses one GPU then another, expert parallelism, which puts different experts on different GPUs), but the way you’re interested in is “tensor parallelism”
This requires a lot of communication between the GPUs, and NVLink speeds that up dramatically.
It comes down to this: If you’re more interested in raw generation speed, especially with parallel calls of smaller models, and/or you don’t care about long context (with 4K being plenty), use Aphrodite. It will ultimately be faster.
But if you simply want to stuff the best/highest quality model you can at VRAM, especially at longer context (>4K), use TabbyAPI. Its tensor parallelism only works over PCIe, so it will be a bit slower, but it will still stream text much faster than you can read. It can simply hold bigger, better models at higher quality in the same 48GB VRAM pool.
It’s probably much smaller than whatever other GGUF you got, aka more tightly quantized.
Look at the filesize, thats basically how much RAM it takes.
Try this one at least, it should still leave plenty of RAM free: https://huggingface.co/bartowski/Qwen2.5-Coder-7B-Instruct-GGUF/blob/main/Qwen2.5-Coder-7B-Instruct-IQ4_XS.gguf
Try reducing the context size, and make sure Q8/Q8 flash attention is enabled with flags.
I’d link a specific GGUF quantization, but huggingface seems to be down for me!
The best hope for you is ZLUDA’s revival. It’s explicitly targeting LLM runtimes now, and RDNA1 (aka your 5600XT) is the oldest supported generation.
https://www.phoronix.com/news/ZLUDA-Third-Life
TBH you should consider using free llama/qwen APIs as well, when appropriate.
You can only allocate so much to metal backends, and if you are on (say) an 8GB Mac there won’t be much RAM left for the LLM itself.
But still, use a tighter quantization (like an IQ4 or IQ3_KM) of Qwen Coder 7B, and close as many background programs as you can. It should be small enough to fit.
deleted by creator
Also, AMD is not off the table for multi-gpu. I know some LLM runners are buying used 32GB MI100s.
mmmm I would not use the AUR version, especially on Fedora. It probably relies on a bunch of arch system packages, among other things.
Try installing the rocm fork directly, with its script: https://github.com/YellowRoseCx/koboldcpp-rocm?tab=readme-ov-file#linux
EDIT: There does seem to be a specific quirk related to Fedora.
Oh, and again, for raunchy, there are explicit “RP” finetunes, like: https://huggingface.co/TheDrummer
But you just need to set a good system prompt or start a reply with “Sure,” and plain qwen or llama will write out unspeakable things.
What’s the error? Did you manually override your architecture as an environment variable?
https://old.reddit.com/r/ROCm/comments/18z29l6/comment/kgeuguq/
You are gfx1032
Just imagine if the UN had teeth for enforcement, at least for overwhelming votes like this. I feel like its one of the biggest oversights of the post WWII order they tried to make.
Big countries, of course, would never allow that, but still.