K0balt a day ago
We will never achieve AGI, because we keep moving the goalposts. SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale. Humans also produce nonsensical, useless output. Lots of it.

Yes, LLMs have many limitations that humans easily transcend. But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses. Relatively few (probably less than half) are casually capable of the level of reasoning that LLMs exhibit.

And, more importantly, as anyone who was in the field when neural networks were new is aware, AGI never meant human-level intelligence until the LLM age. It just meant that a system could generalize to one domain from knowledge gained in other domains, without supervision or programming.
rlanday 3 hours ago
> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

So why are so many people still employed as e.g. software engineers? People aren't prompting the models correctly? They're only asking 10 times instead of 20? They're holding it wrong?
| ||||||||||||||||||||||||||||||||||||||||||||
alganet a day ago
> We will never achieve AGI, because we keep moving the goalposts.

I think it's fair to do that to the idea of AGI. Moving the goalposts is often seen as a bad thing (like shifting arguments around). In a more general sense, though, it's our special human sauce: we get better at stuff, then raise the bar. I don't see a reason why we should give LLMs a break if we can be more demanding of them.

> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Performance should include energy consumption. Humans are incredibly efficient at being smart while demanding very little energy.

> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

What if we could? What if education mostly stopped improving in 1820 and we're still learning physics at school by doing exercises about train collisions and clock pendulums?