highfrequency an hour ago

Per the author’s links, he warned that deep learning was hitting a wall in both 2018 and 2022. Now would be a reasonable time to look back and say “whoops, I was wrong about that.” Instead he seems to be doubling down.

tim333 an hour ago | parent | next [-]

The author is a bit of a stopped clock who has been saying deep learning is hitting a wall for years, and I guess one day he may be proved right?

He probably makes quite good money as the go to guy for saying AI is rubbish? https://champions-speakers.co.uk/speaker-agent/gary-marcus

jvanderbot 18 minutes ago | parent | next [-]

Well..... tbf. Each approach has hit a wall. It's just that we change things a bit and move around that wall?

But that's certainly not a nuanced / trustworthy analysis of things unless you're a top tier researcher.

espadrine 8 minutes ago | parent [-]

Indeed. A mouse that runs through a maze may be right to say that it is constantly hitting a wall, yet it makes constant progress.

An example is citing Mr Sutskever's interview this way:

> in my 2022 “Deep learning is hitting a wall” evaluation of LLMs, which explicitly argued that the Kaplan scaling laws would eventually reach a point of diminishing returns (as Sutskever just did)

which is misleading, since Sutskever said it didn't hit a wall in 2022[0]:

> Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling

The larger point that Mr Marcus makes, though, is that the maze has no exit.

> there are many reasons to doubt that LLMs will ever deliver the rewards that many people expected.

That is something that most scientists disagree with. In fact, the ongoing progress on LLMs has already delivered tremendous utility, which may by itself justify the investment.

[0]: https://garymarcus.substack.com/p/a-trillion-dollars-is-a-te...

chii an hour ago | parent | prev | next [-]

a contrarian needs to keep spruiking the point, because if he relents, he loses the core audience that listened to him. That's why it's also the same with those who keep predicting market crashes etc.

JKCalhoun 26 minutes ago | parent | prev [-]

I thought the point though was that Sutskever is saying it too.

jayd16 7 minutes ago | parent | prev | next [-]

If something hits a wall and then takes a trillion dollars to move forward but it does move forward, I'm not sure I'd say it was just bluster.

Ukv an hour ago | parent | prev | next [-]

Even further back:

> Yet deep learning may well be approaching a wall, much as I anticipated earlier, at beginning of the resurgence (Marcus, 2012)

(From "Deep Learning: A Critical Appraisal")

bgwalter an hour ago | parent | prev | next [-]

Several OpenAI people said in 2023 that they were surprised by the public's acceptance, because they themselves didn't find LLMs that impressive.

The public has now caught up with that view. Familiarity breeds contempt, in this case justifiably so.

EDIT: It is interesting that, in a submission about Sutskever, a comment essentially citing Sutskever gets downvoted. You can do that here, but the whole of YouTube will still hate "AI".

Jyaif an hour ago | parent [-]

> in this case justifiably so

Oh please. What LLMs are doing now was complete and utter science fiction just 10 years ago (2015).

bgwalter 9 minutes ago | parent | next [-]

Why would the public care what was possible in 2015? They see the results from 2023-2025 and aren't impressed, just like Sutskever.

lisbbb a minute ago | parent | prev | next [-]

What exactly are they doing? I've seen a lot of hype but not much real change. It's like a different way to google for answers, with some code generation tossed in, but it's not like LLMs are folding my laundry or mowing my lawn. They seem to be good at putting graphic artists out of work, mainly because the public abides the miserable slop produced.

deadbabe 34 minutes ago | parent | prev [-]

Not really.

Any fool could have anticipated the eventual result of transformer architecture if pursued to its maximum viable form.

What is impressive is the massive scale of data collection and compute resources rolled out, and the amount of money pouring into all this.

But 10 years ago, spammers were building simple little bots with markov chains to evade filters because their outputs sounded plausibly human enough. Not hard to see how a more advanced version of that could produce more useful outputs.
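For anyone who never saw one of those spam bots: the trick was just a word-level Markov chain, where each word maps to the words that followed it in some corpus, and generation is a random walk over that table. A minimal sketch (all names here are illustrative, not from any actual spam tool):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Random-walk the chain to emit a plausible-sounding word sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran under the mat"
chain = build_chain(corpus)
print(generate(chain, "the", length=8))
```

Feed it a big enough corpus and the output is locally grammatical enough to slip past naive filters, which is the "plausibly human" effect the parent describes; a transformer is, very loosely, that idea with learned long-range context instead of a one-word lookback.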

Workaccount2 26 minutes ago | parent | next [-]

Any fool could have seen self-driving cars coming by 2022. But that didn't happen. And still hasn't happened. But if it had happened, it would be easy to say

"Any fool could have seen this coming in 2012 if they were paying attention to vision model improvements"

Hindsight is 20/20.

free_bip 30 minutes ago | parent | prev [-]

I guess I'm worse than a fool then, because I thought it was totally impossible 10 years ago.

otabdeveloper4 an hour ago | parent | prev [-]

> learning was hitting a wall in both 2018 and 2022

He wasn't wrong though.