mopierotti 2 hours ago:
This (plus llmfit) is a great attempt, but I've been generally frustrated by how hard it is to find any guidance on what I'd expect to be the most straightforward, most common question: "What is the highest-quality model I can run on my hardware, with tok/s greater than <x> and a context limit greater than <y>?" (My personal approach has devolved into guess-and-check, which is time-consuming.) When using TFA/llmfit, I'm immediately skeptical because I already know that Qwen 3.5 27B Q6 @ 100k context works great on my machine, yet it's buried behind relatively obsolete suggestions like the Qwen 2.5 series. I assume this is because their tok/s is much higher, but I don't get much marginal utility out of speeds beyond ~50 t/s, and there's no way to sort results by quality.
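The query being asked for here is essentially a filter-and-sort over a catalog of (model, quantization) candidates benchmarked on one's own machine. A minimal sketch, with entirely hypothetical model names, quality scores, and measured speeds:

```python
# Hypothetical sketch of "highest-quality model with tok/s > x and
# context > y". All catalog entries and numbers are made-up examples.
from dataclasses import dataclass

@dataclass
class ModelFit:
    name: str
    quality: float      # aggregate benchmark score (hypothetical)
    tok_per_s: float    # decode speed measured on *this* machine
    max_context: int    # largest context that still fits in memory

catalog = [
    ModelFit("small-7b-q4",  quality=0.61, tok_per_s=120.0, max_context=128_000),
    ModelFit("mid-14b-q5",   quality=0.70, tok_per_s=65.0,  max_context=64_000),
    ModelFit("big-27b-q6",   quality=0.78, tok_per_s=52.0,  max_context=100_000),
    ModelFit("huge-70b-q3",  quality=0.80, tok_per_s=9.0,   max_context=32_000),
]

def best_model(catalog, min_tok_s, min_context):
    """Highest-quality model that meets both hard constraints."""
    candidates = [m for m in catalog
                  if m.tok_per_s >= min_tok_s and m.max_context >= min_context]
    return max(candidates, key=lambda m: m.quality, default=None)

pick = best_model(catalog, min_tok_s=50, min_context=100_000)
print(pick.name if pick else "nothing fits")  # -> big-27b-q6
```

The hard part, as the replies below note, isn't the filter itself but populating the `quality` column with something trustworthy.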
comboy an hour ago:
What is the $/Mtok at which you would choose your time over the savings of running stuff locally? Just to be clear, this may sound like a snarky comment, but I'm genuinely curious how you (or others) see it. There are some batched, long-running tasks where, ignoring electricity, it's kind of free, but local generation is usually slower (and worse quality), and we all want to get some stuff done. Or is it not about the cost at all, just about not pushing your data into the cloud?
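The electricity side of that comparison is easy to put a rough number on. A back-of-envelope sketch, where the wattage, decode speed, and electricity price are all illustrative assumptions, not measurements:

```python
# Rough $/Mtok for local generation. All three inputs are assumptions;
# plug in your own GPU draw, measured tok/s, and electricity price.
watts = 300.0          # assumed GPU power draw while generating
tok_per_s = 50.0       # assumed local decode speed
price_per_kwh = 0.15   # assumed electricity price, $/kWh

seconds_per_mtok = 1_000_000 / tok_per_s          # 20,000 s, about 5.6 h
kwh_per_mtok = watts * seconds_per_mtok / 3.6e6   # joules -> kWh
cost_per_mtok = kwh_per_mtok * price_per_kwh

print(f"~${cost_per_mtok:.2f} per million output tokens")  # ~$0.25
```

Under these assumptions the marginal electricity cost is on the order of a quarter per million tokens, which is why the real trade-off tends to be wall-clock time and quality rather than money.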
J_Shelby_J 2 hours ago:
It's a hard problem. I've been working on it for the better part of a year. Granted, my project tries to do this in a way that works across multiple devices and supports multiple models, finding both the best "quality" and the best allocation, which makes the search space explode. But "quality" is the hard part. For now I'm just choosing the largest quants.
downrightmike 2 hours ago:
LLMs are just special-purpose calculators, as opposed to normal calculators, which only do numbers and MUST be accurate. There aren't very good ways of knowing what you want, because the people making the models can't read your mind and have different goals.