christina97 5 hours ago

I recently set up the 26B A4B model up on vLLM on an RTX3090 (4-bit) after a hiatus from local models. Just completely blown away by the speed and quality you can get now for sub-$1k investment.
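For anyone wanting to try the same setup, a minimal sketch of serving a 4-bit quant with vLLM on a single 24 GB card. The model ID is a placeholder (substitute whichever AWQ/GPTQ quant you actually use), and the context length and memory fraction are just reasonable starting points, not the commenter's exact config:

```shell
# Sketch: serve a 4-bit (AWQ) quantized model with vLLM on one 24 GB GPU.
# "some-org/26B-A4B-AWQ" is a placeholder model ID, not a real repo.
vllm serve some-org/26B-A4B-AWQ \
  --quantization awq \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.90
```

This exposes an OpenAI-compatible API on port 8000 by default, so any standard client can point at it.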

I tried Qwen first, but it was unstable and produced ridiculously long thinking traces!

2ndorderthought 4 hours ago | parent | next [-]

Some of the early quants for Qwen3.6 were broken. It's still finicky, but with a little hand-holding it's crazy good.

Local models are the future. It's awesome.

aimxhaisse 3 hours ago | parent | prev | next [-]

It even fits on a 3060 with turboquant / Q4 at decent speed (~40 T/s) for ~$200 (:

jszymborski 5 hours ago | parent | prev | next [-]

The A4B model is blazing fast and super good at general inquiries. It's notably worse than Qwen 3.6 for coding tasks, but that says more about the Qwen model than anything.

maille 38 minutes ago | parent [-]

Bad at coding, but would it be good at code review?

moffkalast 2 hours ago | parent | prev [-]

The 31B is surprisingly fast too, for a dense model. Token generation runs at least twice as fast as it ought to on my machine compared to other 30Bs, probably thanks to the hybrid attention I guess. Prompt ingestion is somewhat slower though.