yoz-y 6 days ago
I guess hardware capable of running a local model will eventually get cheap enough, but for a lot of people even buying an Apple device, or anything with a good enough GPU, is prohibitively expensive.
PeterStuer 6 days ago
True, it will get cheap to run today's frontier models. But by that time, how much more advanced will the frontier models of that day be? That's the real question. It all depends on whether AI progress turns out to be linear or exponential.
hadlock 6 days ago
I think we are already there. You can run a pretty decent LLM on a 4 GB Raspberry Pi today that will write most any simple 20-150 line bash script, or a toy application in Python or Rust. Old laptops pulled out of the trash can probably run smaller LLMs and explain how functions work. They're no Claude Code, but if you're planning on using an LLM to learn to code, a rough-around-the-edges one that can't do everything for you is probably what you want anyway.
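
For what it's worth, the setup cost of trying this is close to zero. A minimal sketch, assuming Ollama is already installed on the Pi or old laptop, and using a small quantized model (the llama3.2:1b tag is just one example of something that fits in a few GB of RAM):

    # pull a small model and ask it for a simple script
    # (assumes Ollama is installed and the machine has a few GB of free RAM)
    ollama pull llama3.2:1b
    ollama run llama3.2:1b "Write a bash script that backs up ~/notes into a dated tarball"

It won't be fast, and the output will need a critical eye, but for learning purposes that's arguably a feature rather than a bug.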