hypercube33 2 hours ago

All we need is something like Qwen3-coder-next but at Kimi K2.6 ability, so it runs on laptop/workstation hardware, and we are set... soon?
wolttam an hour ago

In 2023, GPT-4 was allegedly 1.8T parameters. In 2026 we have ~100x smaller models (10-20B) that handily outperform it, and they can indeed run on a laptop.
unshavedyak 36 minutes ago

I am eagerly awaiting being able to run a strong local model. I'd hand Apple $5k right now for a Claude in a box. I know the economics might not be there yet; I'm just saying that's around my ideal price point. $10k might even be worth it, but I'm assuming the more expensive it is, the beefier it is too, which also means more electricity... and I already run ~6 computers/servers in my house. If a power surge happens, I'm going to go live in the woods lol.