| ▲ | Show HN: Find the best local LLM for your hardware, ranked by benchmarks (github.com)
| 212 points by andyyyy64 4 hours ago | 32 comments
| ▲ | karmakaze 44 minutes ago | parent | next [-]
Not perfect, but I find the artificialanalysis.ai "Intelligence vs. Output Tokens Used in Artificial Analysis Intelligence Index" chart[0] (scroll down to the titled chart) to be of great use. A proper evaluation needs to compare three things together: score, speed, and verbosity. This chart plots score vs. verbosity.

[0] https://artificialanalysis.ai/?models=gpt-oss-120b%2Cgemma-4...
| ▲ | jordiburgos 2 hours ago | parent | prev | next [-]
This is very helpful too: https://www.canirun.ai/
| ▲ | pornel 3 hours ago | parent | prev | next [-]
It looks nice. I've been searching for something like this recently, and was frustrated with rankings that lack the latest models or don't clearly distinguish quantizations. Showing quality loss per quantization is nice.

I'd prefer this as a website, since I'd handle running the model with a dedicated inference server anyway.

It would be nice to see the maximum context length that can fit on top of the baseline. I was surprised how much token generation speed tanks when using very long context: 30 tok/s can drop down to 2 tok/s. A single speed metric didn't prepare me for that.

I was also positively surprised that some models scale well with batch parallelism. I can get a 4x speed improvement by running 8 requests in parallel. But this affects memory requirements, and doesn't apply to all models and inference engines. It would be nice to show that. Some sites fold it into "what's your workflow", but that's too opaque.

KV cache quantization also makes a difference for speed, VRAM usage, and maximum usable context. On Apple Silicon, MLX-compatible model builds make a difference, so I'd like benchmarks to reassure me they're based on the fastest implementation. Multi-token prediction is another aspect that may substantially change speed.
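To make the KV cache point concrete, here's a minimal back-of-the-envelope sketch; the layer/head numbers are illustrative assumptions in the style of an 8B-class Llama model, not taken from the tool. The cache grows linearly with context length, and quantizing it halves or quarters the bytes per element:

```python
# Rough KV cache sizing: K and V tensors per layer, each holding
# n_kv_heads * head_dim values per token, at bytes_per_elem each.
def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * ctx_len / 1024**3

# Illustrative 8B-class shape: 32 layers, 8 KV heads, head_dim 128.
for ctx in (8_192, 32_768, 131_072):
    fp16 = kv_cache_gb(32, 8, 128, ctx, 2.0)
    q4 = kv_cache_gb(32, 8, 128, ctx, 0.5)
    print(f"{ctx:>7} tokens: {fp16:5.2f} GB fp16 KV vs {q4:.2f} GB q4 KV")
```

Under these assumptions, 128k of context costs roughly 16 GB of fp16 KV cache on top of the weights, which is why both fit and generation speed change so drastically at long context.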
| ▲ | karmakaze 20 minutes ago | parent | prev | next [-]
Is there any free hosting for Python scripts? That would be much more convenient for casual use.
| ▲ | rafram 3 minutes ago | parent | prev | next [-]
OP is a newish user, all of their responses here are copied straight from Claude, and this project has an LLM-slop README (count them: 48 em-dashes on the page!) and LLM code. Just not very interesting.
| ▲ | armcat 34 minutes ago | parent | prev | next [-]
Interesting concept! A suggestion: `whichllm <USE_CASE>` would be more beneficial, e.g. `whichllm coding` or `whichllm text-to-video`.
| ▲ | Bigsy 3 hours ago | parent | prev | next [-]
Brew install is broken. It seems pretty rubbish, I have to say: it's recommending me loads of Qwen 2.5 models, which are really old, and I'm easily running Qwen 3.5 and 3.6 models on this Mac at decent quants.
| ▲ | zkmon an hour ago | parent | prev | next [-]
"Best LLM" doesn't really depend on hardware alone. It actually depends more on your needs: type of workload, context length needed, etc.
| ▲ | llagerlof 3 hours ago | parent | prev | next [-]
What's new compared to llmfit?
| ▲ | sleepyeldrazi 3 hours ago | parent | prev | next [-]
I love this community. I started building a simple website for exactly this a couple of hours ago, and you've already made an even more advanced version. Hats off to you, sir.

If I ever decide to actually publish the site, is it alright if I mention you somewhere as "If you want a more accurate estimation, check out this project: <your repo>"? I think there is value in having a simple website estimate this information for you and give you instructions/common flags on how to start it yourself (plus a prompt crafted for you to optionally give to an LLM to set it up for you), but I'm going off a simple "choose an OS, GPU/VRAM, here's a list of options" and not actually scanning (which is a lot more accurate).
| ▲ | Jasssss 3 hours ago | parent | prev | next [-]
The plan command is clever. How do you handle the VRAM estimation for models with sliding window attention vs full context? Something like Mistral at 32k context uses way less KV cache than Llama at the same context length, but from the README it looks like the estimation is based on a fixed context size. Does it account for that?
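For anyone unfamiliar with the distinction: with sliding-window attention the KV cache per layer is capped at the window size, so memory stops growing once the context exceeds the window. A toy sketch (the 4096-token window is an assumption matching Mistral 7B's published config, not the tool's logic):

```python
def kv_tokens_cached(ctx_len, window=None):
    # Full attention caches every token in context; sliding-window
    # attention keeps only the most recent `window` tokens per layer.
    return ctx_len if window is None else min(ctx_len, window)

ctx = 32_768
print("full attention:", kv_tokens_cached(ctx))        # 32768 tokens cached
print("sliding window:", kv_tokens_cached(ctx, 4096))  # capped at 4096
```

At 32k context that's an 8x difference in KV cache size, which is exactly why a fixed-context estimate can be badly off for these models.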
| ▲ | s3anw3 13 minutes ago | parent | prev | next [-]
Good job.
| ▲ | justindotdev 44 minutes ago | parent | prev | next [-]
It'd be nice if it had iGPU support; it can't even detect mine. Overall a great tool though, happy this exists.
| ▲ | wald3n an hour ago | parent | prev | next [-]
Cool idea, thanks for making this.
| ▲ | andai an hour ago | parent | prev | next [-]
Has anyone gotten the old gpt-oss models running well? They scored very high on benchmarks, but I constantly had strange problems with them. So two questions there: (1) is it actually possible to get good results with them? Some people said they did, which suggests they may just be hard to set up properly, but that once you do, they're actually good. Which implies the second question: (2) are benchmarks a spook?

...Also, is OP Claude?
| ▲ | kramit1288 3 hours ago | parent | prev | next [-]
Accurate memory estimation is key here. It will crash if that's not accurate, and it can't be generic for all local LLMs; each local LLM has different context estimates.
| ▲ | cyanydeez 2 hours ago | parent | prev | next [-]
This doesn't correctly detect the unified memory architecture:

  GPU 0: STRXLGEN — 8.0 GB (ROCm 6.19.8-200.fc43.x86_64) — BW: N/A
  CPU: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S — 16 cores (AVX2, AVX-512)

The 8 GB is the reserved carve-out, not the total memory available to the GPU. On Linux, unified memory allocation works like this: https://www.jeffgeerling.com/blog/2025/increasing-vram-alloc...

Don't feel bad though, nvtop doesn't do it correctly either.
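For reference, a hedged sketch of how a detection tool could get a more realistic number on amdgpu: the driver exposes both the dedicated carve-out and the GTT aperture (system RAM the GPU can map) in sysfs, and on unified-memory parts the GTT is most of what matters. The sysfs paths are standard amdgpu, but the card index and exact semantics vary per system:

```python
from pathlib import Path

def amdgpu_memory_gb(card="card0"):
    # amdgpu reports the dedicated carve-out and the GTT aperture
    # (system RAM the GPU can address) as separate byte counters.
    dev = Path(f"/sys/class/drm/{card}/device")
    vram = int((dev / "mem_info_vram_total").read_text())
    gtt = int((dev / "mem_info_gtt_total").read_text())
    return vram / 1024**3, gtt / 1024**3

carveout_gb, gtt_gb = amdgpu_memory_gb()
print(f"carve-out: {carveout_gb:.1f} GB, GTT: {gtt_gb:.1f} GB, "
      f"usable on unified memory: ~{carveout_gb + gtt_gb:.1f} GB")
```

Summing the two is only an approximation of what an inference engine can actually allocate, but it's far closer than reporting the carve-out alone.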
| ▲ | macwhisperer 3 hours ago | parent | prev | next [-]
Can you add in the other quants like IQ3_M? Also, my personal simple rule of thumb for local AI sizing is: max model size (GB) = RAM (GB) / 1.65.
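Taken literally, that rule of thumb is just the following (a hypothetical helper for illustration; the 1.65 divisor is the commenter's, which reserves roughly 40% of RAM for everything besides weights):

```python
def max_model_gb(ram_gb: float) -> float:
    # Reserve ~40% of RAM for KV cache, OS, and the inference runtime.
    return ram_gb / 1.65

print(f"{max_model_gb(32):.1f}")  # 32 GB RAM -> ~19.4 GB of model weights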
| ▲ | pbronez 2 hours ago | parent | prev [-]
Cool, but it looks like it doesn't actually test anything on your machine? It does hardware detection and then some lookups. Maybe I missed it, but I really want a tool like this to actually run a model on my machine to get the speed numbers. I've been using RapidMLX for this.

The integrated speed tests matter because the quality of the backend is a moving target, and the quantization / MLX format conversion also matters. It's not enough to say "oh, use this model family with X parameters"; you have to specify the architecture-specific quantization too.
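For the curious, a minimal sketch of the kind of on-device throughput test being asked for, using llama-cpp-python as one possible backend (the model path and prompt are placeholders, and a real benchmark should separate prompt processing from generation speed):

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a local GGUF model; path and context size are placeholders.
llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=4096, verbose=False)

start = time.perf_counter()
out = llm("Explain KV caches in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```

Running this once per candidate quant on the user's own hardware would give the real numbers a lookup table can't.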