| ▲ | wmwragg 16 hours ago |
| This is something I think a lot of people don't seem to notice or worry about: the shift of programming from a local task to one controlled by big corporations, essentially turning programming into a subscription model, just like everything else. If you don't pay the subscription, you will no longer be able to code, i.e. PaaS (Programming as a Service). Obviously, at the moment most programmers can still code without LLMs, but when autocomplete IDEs became mainstream, it didn't take long before a large proportion of programmers couldn't program without one. I expect most new programmers coming in won't be able to "program" without a remote LLM. |
|
| ▲ | Lio 15 hours ago | parent | next [-] |
| That ignores the possibility that local inference gets good enough to run without a subscription on reasonably priced hardware. I don't think that's too far away. Anthropic, OpenAI, etc. are pushing the idea that you need a subscription, but if open-source tools get good enough, those services could easily become an expensive irrelevance. |
| |
| ▲ | wmwragg 15 hours ago | parent | next [-] | | There is that, but the way this usually works is that there is always a better closed service you have to pay for, and we see that with LLMs as well. There is also the fact that you currently need a very powerful machine to run these models at anywhere near the speed of the PaaS systems, and I'm not convinced we'll see the Moore's-law-style jumps required to get that level of performance locally, not to mention the massive energy requirements; you can only go so small, and we are getting pretty close to the limit. Perhaps I'm wrong, but we don't see the jumps in processing power we used to see in the 80s and 90s from rising clock speeds; the clock speed of most CPUs has stayed pretty much the same for a long time. As LLMs are essentially probabilistic in nature, this does open up options not available to current deterministic CPU designs, so that might be an avenue that gets exploited to bring this to local development. | |
| ▲ | epolanski 3 hours ago | parent | prev | next [-] | | Local inference is already very good on open models if you have the hardware for it. | |
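For context on what "local inference" looks like in practice, here is a minimal sketch using the llama-cpp-python bindings; the model filename, context size, and prompt are hypothetical placeholders for whatever GGUF model you have downloaded, not details from the thread:

    # Minimal on-device inference sketch (pip install llama-cpp-python).
    # The model path below is a placeholder; point it at any local GGUF file.
    from llama_cpp import Llama

    # Load the model entirely on the local machine.
    llm = Llama(model_path="./models/coder-7b-instruct.gguf", n_ctx=4096)

    # Generate a completion; no network call or API key is involved.
    out = llm("Write a Python function that reverses a string.", max_tokens=128)
    print(out["choices"][0]["text"])

Everything here runs on the local machine with no subscription involved, which is the trade-off under discussion: hosted services currently offer more capability and speed, at the cost of independence.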
| ▲ | flyinglizard 15 hours ago | parent | prev [-] | | My concern is that inference hardware is becoming more and more specialized and datacenter-only. It will no longer be possible to just throw in a beefy GPU (in fact, we're already past that point). | | |
| ▲ | wmwragg 11 hours ago | parent [-] | | Yep, good point. If they don't make the hardware available for personal use, then we won't be able to buy it even if it could be used in a personal system. |
|
|
|
| ▲ | smallerfish 15 hours ago | parent | prev [-] |
| This is the most valid criticism. Theoretically, in several years we may be able to run Opus-quality coding models locally. If that doesn't happen, then yes, it becomes a pay-to-play profession, which is not great. |