grim_io 6 days ago
No matter how much better they get, I don't see any actual sign of intelligence, do you? There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, plus reliability I don't need to second-guess every time. I might be wrong, but those two requirements seem incompatible with the current approach and hardware limitations.
muldvarp 6 days ago
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":

> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.

The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent", and software engineering will no longer be thought of as something requiring "actual intelligence".

Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.