gentooflux 16 hours ago
It's a zero-sum game. AI cannot innovate; it can only predictively generate code based on what it's already seen. If we get to a point where new code is mostly or only written by AI, nothing new emerges. No new libraries, no new techniques, no new approaches. Fewer and fewer real developers means less and less new code.
edg5000 15 hours ago | parent | next
Nonsense indeed. The model's knowledge is the current state of the art. Any computation it does advances it. It re-ingests the work of prior agents every time you run it on your codebase, so even though the model initializes the same way (until they update the model), upon repeated calls it ingests more and more novel information, inching the state of the art ever forward.
vanviegen 16 hours ago | parent | prev
Nonsense. LLMs can easily build novel solutions based on my descriptions, even in languages and with (proprietary) frameworks they have not been trained on, given a tiny bit of example code and the reference docs.