tempfile | 8 days ago
I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic, and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.
tim333 | 7 days ago
I don't think it's vibes so much as my thinking about the problem. If you look at the "legitimate concerns", none are really deal breakers:

> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

I'm willing to believe it will be slow, though maybe it won't be.

> LLMs already seem to have hit a wall of diminishing returns

Who cares - there will be other algorithms.

> What if there are several paths to different kinds of intelligence with their own local maxima

Well, maybe, maybe not.

> Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

Well - you can make another one if the first does that.

Those are all potential difficulties with self-improvement, not reasons it will never happen. I'm happy to say it's not happening right now, but do you have any solid arguments that it won't happen in the next century? To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.
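To make the disagreement about diminishing returns concrete, here is a toy sketch of a self-improvement loop. It is purely illustrative: the update rule c -> c + gain(c), the 0.1 gain constants, and the 20-step horizon are assumptions picked for the example, not claims about any real system. The point is just that whether the per-step gain compounds with capability, or shrinks as capability grows, is what separates exponential takeoff from a slow crawl - which is why "slow" and "never" are different arguments.

    # Toy model of a self-improvement loop (illustrative only).
    # Capability c is repeatedly updated by a gain function; all the
    # constants here are made up for the example, not empirical claims.

    def run(steps, gain):
        """Iterate c -> c + gain(c) and return the final capability."""
        c = 1.0
        for _ in range(steps):
            c += gain(c)
        return c

    scenarios = {
        "constant returns (gain ~ c)":       lambda c: 0.1 * c,         # compounds: ~1.1**20
        "diminishing (gain ~ sqrt(c))":      lambda c: 0.1 * c ** 0.5,  # roughly quadratic in time
        "strongly diminishing (gain ~ 1/c)": lambda c: 0.1 / c,         # roughly sqrt(time): a crawl
    }

    for name, gain in scenarios.items():
        print(f"{name:38s} -> capability after 20 steps: {run(20, gain):.2f}")

Under these assumptions the constant-returns loop ends near 6.7x, the sqrt-returns loop near 4x, and the 1/c loop near 2.2x - all still growing, just on very different timescales.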
doubleunplussed | 7 days ago
On the other hand, I'm baffled to see recursive self-improvement discussed not merely as something weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.

We have an existence proof for intelligence that can improve AI: humans. If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.

Are people really that sceptical that AI will reach human-level intelligence? Is that an insane belief, worthy of being a prime example of a community not thinking clearly? Come on!

There is a good chance AI will recursively self-improve. Those pooh-poohing the idea are the ones not thinking clearly.