JKCalhoun | 2 days ago
I've been watching Gary Marcus on BSky, and he seems to seize on anything that substantiates his loathing of LLMs. I wish he were less biased. To paraphrase Brian Eno: whatever shade you want to throw at AI, six months from now they're going to cast it off and you'll have to find new shade to throw. Having said that, I would be thankful if scaling has hit a wall. Scaling seems to me like the opposite of innovation.
pegasus | 2 days ago | parent
"whatever shade yo want to throw at AI, 6 months from now they're going to cast it off" - like hallucinations? To me, that was and still is LLM's achilles heel. For the first couple of years we kept hearing assurances that this issue will be soon overcome. Now it seems AI labs have resigned themselves on this issue and just trying to minimize it. Humans make mistaks too, they say. But humans make human mistakes, whereas LLMs often make completely surprising mistakes, because they don't understand the text they're producing, but there's enough intelligence in them to make these mistakes very hard to spot for us humans. | ||||||||
unclebucknasty | 2 days ago | parent
> 6 months from now they're going to cast it off and you'll have to find new shade to throw

According to the article, it's the opposite: it cites several recent examples in which AI company leaders have had to walk back claims and admit limits.