ajshahH 2 hours ago
This implies programming is done and there will be no further advancements. And flattening is already being seen, no? Recent advancements come mostly from RL'ing, which has its own limitations (and tradeoffs). Are there more tricks after that?
Verdex 40 minutes ago | parent
Yeah, even the AI CEOs are admitting that pre-training scaling is over. They claim we can keep the party going with post-training scaling, which I personally find hard to believe, but I'm not really up to speed on those techniques. I mean, maybe you can just keep an eye on what people are using the tools for and then monkey-patch your way to something sufficiently AGI-like. I'll believe it when we're all begging outside the data centers for bread.

[Based on the rest of the history of science and technology going back to the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, etc.]