doubleunplussed 7 days ago
On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it. We have an existence proof for intelligence that can improve AI: humans. If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself. Are people really that sceptical that AI will get to human-level intelligence? Is that an insane belief worthy of being a primary example of a community not thinking clearly? Come on! There is a good chance AI will recursively self-improve! Those pooh-poohing this idea are the ones not thinking clearly.
tempfile 7 days ago | parent
Consider that even the named phenomenon is sloppy: "recursive self-improvement" does not imply "self-improvement without bounds". This is the "what if you hit diminishing returns and never get past them" claim. Absolutely no justification for that jump has ever been offered by AI boosters.

> If AI ever gets to human-level intelligence

This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very, very shaky. AI is vulnerable to this problem because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if a capability is well defined we may even be able to reason about how stable it is relative to how LLMs work. For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue improving as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more processing time, and this reduced performance. Same with adding extraneous details; sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.

> Is that an insane belief worthy of being a primary example of a community not thinking clearly?

I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.
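To make the diminishing-returns point concrete, here is a toy numerical sketch (the model and numbers are made up purely for illustration, not taken from any paper): if each generation of self-improvement yields a gain that shrinks geometrically, the system keeps "recursively self-improving" forever, yet its capability never crosses a fixed ceiling.

    # Toy model, purely illustrative: generation n adds gain * r**n, with r < 1.
    # The gains form a geometric series, so total capability is bounded by
    # capability_0 + gain / (1 - r) no matter how many generations you run.

    def recursive_improvement(capability=1.0, gain=0.5, r=0.8, generations=1000):
        for n in range(generations):
            capability += gain * r ** n
        return capability

    print(recursive_improvement())      # ~3.5 after 1000 generations
    print(1.0 + 0.5 / (1 - 0.8))        # 3.5, the ceiling it can never exceed

Recursive, yes; unbounded, no. Whether real AI progress looks like this is exactly the open question.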
| ||||||||||||||||||||
solid_fuel 7 days ago | parent
> We have an existence proof for intelligence that can improve AI: humans.

I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years. We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.

Modern brains don't work "better" than those of ancient humans; we just have more knowledge and resources to work with. If you took a modern human child and raised them in the Middle Ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.

----

And if you are talking specifically about the ability to build better AI: we haven't matched human intelligence yet, and there is no indication that the current LLM-heavy approach will ever get there.