▲ jwr | 5 days ago
Hmm. 80B. These days I am on the lookout for new models in the 32B range, since that is what fits and runs comfortably on my MacBook Pro (M4, 64GB). I use ollama every day for spam filtering: gemma3:27b works great, but I've settled on gpt-oss:20b because it's so much faster with comparable accuracy.
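
The comment doesn't include the actual pipeline, but a minimal sketch of the idea, assuming the official ollama Python client (pip install ollama) and a local ollama server with the model already pulled; the prompt and helper are illustrative, not the commenter's real setup:

    import ollama  # assumes a running local ollama server

    def is_spam(subject: str, body: str, model: str = "gpt-oss:20b") -> bool:
        """Ask a local model to classify one email; True means spam."""
        prompt = (
            "You are a spam filter. Answer with exactly one word: SPAM or HAM.\n\n"
            f"Subject: {subject}\n\n{body[:4000]}"  # truncate long bodies
        )
        resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
        return "SPAM" in resp["message"]["content"].upper()

    print(is_spam("You won!", "Claim your free prize now!!!"))  # usually True
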
▲ bigyabai | 4 days ago
The model is 80B parameters, but only 3B are activated during inference. I'm running the old 2507 Qwen3 30B model on my 8GB Nvidia card and get very usable performance.
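
Back-of-envelope on why the MoE shape helps here (assuming ~4-bit quantization; illustrative numbers, not benchmarks):

    # An 80B-total / 3B-active MoE: all weights must live somewhere
    # (VRAM plus system RAM), but each token only reads the active slice.
    total_params = 80e9
    active_params = 3e9
    bytes_per_param = 0.5  # ~4-bit quantization

    print(f"weights resident: {total_params * bytes_per_param / 1e9:.0f} GB")   # ~40 GB
    print(f"read per token:   {active_params * bytes_per_param / 1e9:.1f} GB")  # ~1.5 GB

    # Per-token memory traffic looks like a 3B dense model, which is why
    # tokens/sec stays usable even when most experts sit in system RAM.
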
▲ jabart | 4 days ago
Can you talk more about how you are using ollama for spam filtering?
▲ electroglyph | 5 days ago
It'll run great; it's an MoE (mixture of experts), so only a few billion parameters are active per token.