NicoJuicy | 4 days ago
If you have a 24 GB 3090, try out qwen:30b-a3b-instruct-2507-q4_K_M (Ollama). It's pretty good.
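For anyone who hasn't used it: with the Ollama CLI installed and its daemon running, trying the model is a one-liner (model tag exactly as written above; first run pulls the weights, which is a large download):

```shell
# Pull the quantized model (if not cached) and start an interactive chat
ollama run qwen:30b-a3b-instruct-2507-q4_K_M
```

You can also hit the same model over Ollama's local HTTP API once it's loaded, if you'd rather script against it than chat in the terminal.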
naabb | 3 days ago
To be fair, I also run that on a 16 GB 5070 Ti at 25 tok/s; it's amazing how fast it runs on consumer-grade hardware. You could probably push up to a bigger model, but I don't know enough about running LLMs locally to say.
jszymborski | 3 days ago
You don't need a 3090; it runs really fast on an RTX 2080 too.