grayhatter 4 hours ago
> But a short line "AGI is possible, powerful and perilous" is something 9 out of 10 of frontier AI researchers at the frontier labs would agree upon.

> At which point the question becomes: is it them who are deluded, or is it you?

Given the currently very asymptotic curve of LLM quality with additional training, and how most of the recent improvements have come from better non-LLM harnesses and scaffolding, I don't find it convincing that transformer-based generative LLMs are likely to ever reach something these labs would agree is AGI (unless they're also the ones selling it as such).

And you can apply the same argument to Natural General Intelligence: humans can do both impressive and scary stuff.

I'll ignore the made-up 5% and 25%, and instead suggest that pragmatic and optimistic/predictive worldviews don't conflict. You can predict that the magic word box you enjoy is special and important, making it obvious to you that AGI is coming, while it also doesn't feel like a given to people unimpressed by its painfully average output. The problem is that optimism that transformer LLMs will evolve into AGI requires a breakthrough the current trend of evidence doesn't support.

Will humans invent AGI? I'd bet it's a near certainty. Is general intelligence impressive and powerful? Absolutely. I mean, look: organic general intelligence invented artificial general intelligence in the future... assuming we don't end civilization with nuclear winter first...