alganet | a day ago
> We will never achieve AGI, because we keep moving the goalposts.

I think it's fair to do that to the idea of AGI. Moving the goalposts is often seen as a bad thing (like shifting arguments around), but in a more general sense it's our special human sauce: we get better at stuff, then raise the bar. I don't see a reason why we should give LLMs a break if we can be more demanding of them.

> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Performance should include energy consumption. Humans are incredibly efficient at being smart while demanding very little energy.

> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

What if we could? What if education mostly stopped improving in 1820 and we're still learning physics at school by doing exercises about train collisions and clock pendulums?
K0balt | 16 hours ago | parent
I’m with you on the energy and the limitations, and even on the moving of goalposts. I’d like to add, though, that I think the working definition of AGI has jumped the shark and is already at ASI: we expect our machine to exhibit professional-level acumen across such a wide range of knowledge that it would rank with the top 0.01 percent of career scholars and engineers, or even exceed any known human capacity just due to breadth of knowledge. And we also expect it to provide that level of focused interaction to a small city of people all at the same time, and to deliver that knowledge 10,000 times faster than any human can. Definitionally, I think that is ASI.

But I also think the AGI “we are still chasing” focus-groups a lot better than ASI, which is legitimately scary as shit to the average Joe, and which seasoned engineers recognize as a significant threat if controlled by people with misaligned intentions. PR needs us to be “approaching AGI”, not “closing in on ASI”, or we would be pinned down with prohibitive regulatory straitjackets in no time.