2ndorderthought 2 hours ago
I've been saying it for a long time now: I think small models are the future for LLMs. It's been fun watching experiments push the limits of just how much better models get by making them insanely large, but it's not sustainable. No, I'm not saying this model is a drop-in Claude replacement. But in two years we might be really surprised by what can be done on a desktop with commodity hardware, no internet connection, and a few models that each cover a subset of tasks. Really happy to see AMD throw its hat in the ring; it's a good day for AMD investors. I know a lot of AI bros will scoff at this, but completing your first training run is a big deal for a new lab. AMD is on its way, despite Nvidia having years of runway.
zimi-24-imiz 2 hours ago
Using C was 100 times as productive as assembly. What happened was not that we finished software 100 times faster, but that we did projects 100 times bigger in the same time. Same thing with smol local LLMs versus the big ones in the sky: your smol local LLM will only be able to tackle projects that are no longer commercially valuable, because people will expect 100x the scope and features. Which is fine as a hobby/art project. Yes, we'll do amazing things with local LLMs in two years, but the big LLMs will do things beyond imagination (assembly vs. C).
steveharing1 an hour ago
You couldn't be any more right!