unclebucknasty 2 days ago

I've seen this characterization of Marcus here, and it seems to follow the sentiment of the AI leaders he referenced in the article.

But I've yet to see where he's been wrong (or, in particular, any more wrong than the AI thinking and leadership he's questioning). Do you have any citations?

Also, if you stopped on seeing his name, I'd encourage you to take another look, specifically at the sections where he discusses AI leadership's prior dismissal of his doubts and their subsequent walk-backs of their own claims.

Would be interested in your take on that.

xiphias2 2 days ago | parent [-]

Reasoning LLMs getting better at ARC-AGI proves that they are able to solve symbolic tasks without relying on task-specific search on the CPU (the brute-force method).
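
(To make "specific search on the CPU" concrete: the brute-force baseline looks roughly like the toy sketch below. The primitive set here is made up purely for illustration; real solvers use far richer DSLs, but the shape of the method is the same.)

    # Toy brute-force program search over ARC-style grids: try every short
    # composition of primitive transforms until one explains all train pairs.
    from itertools import product
    import numpy as np

    # Hypothetical primitives; real DSLs are much larger.
    PRIMITIVES = {
        "identity":  lambda g: g,
        "flip_h":    lambda g: np.fliplr(g),
        "flip_v":    lambda g: np.flipud(g),
        "rot90":     lambda g: np.rot90(g),
        "transpose": lambda g: g.T,
    }

    def run_program(names, grid):
        # Apply the candidate op sequence to a grid, left to right.
        for name in names:
            grid = PRIMITIVES[name](grid)
        return grid

    def solve(train_pairs, max_depth=3):
        # Return the first op sequence consistent with every train pair.
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):  # exhaustive
                if all(
                    np.array_equal(run_program(names, np.array(inp)),
                                   np.array(out))
                    for inp, out in train_pairs
                ):
                    return names
        return None

    # Trivial example task: the hidden rule is "rotate 90 degrees".
    pairs = [([[1, 2], [3, 4]], [[2, 4], [1, 3]])]
    print(solve(pairs))  # -> ('rot90',)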

It's never "pure scaling" (just running the same algorithm on more hardware); there's continuous improvement in how to be even more algorithmically efficient (and algorithmic scaling is faster than hardware scaling).
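
On that last point, a toy comparison with made-up doubling times, just to show what it means for algorithmic scaling to outrun hardware scaling (the actual rates are contested; these numbers are purely illustrative):

    # Hypothetical doubling times, for illustration only.
    HW_DOUBLING_YEARS = 2.0    # hardware: 2x compute every 2 years
    ALGO_DOUBLING_YEARS = 1.0  # algorithms: 2x efficiency every year

    for years in (2, 4, 8):
        hw = 2 ** (years / HW_DOUBLING_YEARS)
        algo = 2 ** (years / ALGO_DOUBLING_YEARS)
        print(f"{years}y: hardware x{hw:.0f}, algorithms x{algo:.0f}, "
              f"combined x{hw * algo:.0f}")
    # 2y: hardware x2,  algorithms x4,   combined x8
    # 4y: hardware x4,  algorithms x16,  combined x64
    # 8y: hardware x16, algorithms x256, combined x4096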

unclebucknasty a day ago | parent [-]

>Reasoning LLMs getting better at ARC-AGI prove...

Even if true, it wouldn't be dispositive WRT my question, but...

1. Strictly speaking, LLMs themselves aren't capable of reasoning, by definition. Without external techniques, they are only capable of simulating reasoning, i.e. exhibiting reasoning-like behavior.

2. It's known that at least some, and possibly most, of the progress on the test has been the result of tuning specifically for the test ("cheating") rather than any emergent AGI. [0]

>It's never "pure scaling"

Oh, but it was. There has absolutely been a focus on pure scaling as the proposed path to significant progress, and some prominent proponents have had to walk back their expectations/claims.

I think there's a little bit of revisionism going on, as they want past claims to be quickly forgotten. The interesting part is that the scaling mantra is starting anew with the new reasoning techniques.

[0] https://www.lesswrong.com/posts/KHCyituifsHFbZoAC/arc-agi-is...