| ▲ | avazhi 7 hours ago |
> What stops you from running the best open weighted LLMs currently available on consumer grade hardware for the rest of time?

Uh… the hardware requirements? And stop acting like some dog shit 8B model the average Joe can run on a laptop is even close to being comparable to what Claude or even Codex can currently do. I have pretty good hardware and I've tinkered with the best sub-150B models you can use, and they are awful compared to Anthropic/OAI/Grok.
| ▲ | apsurd 7 hours ago | parent | next [-] |
What if the harness and loops get sufficiently better, though? CC is using Haiku for code-base grepping and such. Do you not see a local commodity model being "good enough" for the 80% case when matched with better harnesses and tool calls? Honest question; I'm very interested in this, but too casual as of now to know any better.
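To make the question concrete, here's roughly the shape I'm imagining: a minimal agent loop where the harness runs the tools and a local model (served via any OpenAI-compatible endpoint, e.g. Ollama) just decides what to search for. The model name, endpoint, and `grep_repo` tool are all illustrative placeholders, not anything CC actually does.

```python
# Sketch only: a tiny harness pairing a local model with a grep tool.
# Assumes Ollama (or any OpenAI-compatible server) on localhost:11434
# and a tool-capable local model; swap in whatever you run.
import json
import subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

TOOLS = [{
    "type": "function",
    "function": {
        "name": "grep_repo",
        "description": "Search the repository for a pattern and return matching lines.",
        "parameters": {
            "type": "object",
            "properties": {"pattern": {"type": "string"}},
            "required": ["pattern"],
        },
    },
}]

def grep_repo(pattern: str) -> str:
    # The harness, not the model, executes tools: plain grep over the working tree.
    out = subprocess.run(["grep", "-rn", pattern, "."],
                         capture_output=True, text=True)
    return out.stdout[:4000]  # truncate so a small model isn't flooded

messages = [{"role": "user",
             "content": "Where is the retry logic configured in this repo?"}]

# The agent loop: call the model, run any tool calls it makes, feed results back.
for _ in range(5):
    resp = client.chat.completions.create(
        model="qwen2.5-coder:7b",  # assumption: any local model with tool support
        messages=messages, tools=TOOLS)
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        print(msg.content)
        break
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": grep_repo(**args)})
```

The point being: the loop, the truncation, the tool surface — all of that lives in the harness, so a better harness might cover a lot of ground even with a mediocre model.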
| ▲ | apublicfrog 3 hours ago | parent | prev [-] |
> And stop acting like some dog shit 8B model the average Joe can run on a laptop is even close to being comparable to what Claude or even Codex can currently do.

I'm not; you've actually illustrated my point. LLMs in 2022 were very impressive. By 2024 the general public was finding them an acceptable replacement for many research-driven tasks and a massive shortcut for others (coding, image work, document preparation, etc.). Models of that caliber are absolutely runnable on consumer hardware now, and we were extremely happy with the results at the time.

It's no different from how we used to think CRTs or early smartphones were amazing, but going back to them now they seem awful. We're long past "danger". If what we have is the best we'll ever have open source, we're already in an excellent position.