gwern 2 days ago

[flagged]

djmips a day ago | parent | next [-]

Your browser must be REALLY slow - his name is the first thing after the title when you go to the article.

EA-3167 a day ago | parent [-]

I don't believe that these comments (and if you scroll down, there are quite a few nearly identical ones) are intended to do anything more than find an HN-safe way of expressing an ad hominem attack, in the hope that people won't read the article or engage with its arguments. I want to find a more charitable interpretation, but it's very difficult.

I kind of get it, though: presumably a large number of people here have paychecks that depend on him being wrong, or at least on the perception that he's wrong for a while.

unclebucknasty a day ago | parent [-]

>a large number of people here have paychecks that depend on him being wrong

I was thinking earlier that it's mildly fascinating how some people are more annoyed by the hype, and others by the anti-hype. It'd be interesting to know whether these reactions are part of our psychological profiles, tied to political leanings, etc.

But, now that you mention it, yeah—it might just be the money.

unclebucknasty 2 days ago | parent | prev | next [-]

I've seen this characterization of Marcus here, and it seems to follow the sentiment of the AI leaders he referenced in the article.

But I've yet to see where he's been wrong (or, in particular, more wrong than the AI thinking and leadership he's questioning). Do you have any citations?

Also, if you stopped on seeing his name, I'd encourage you to take another look—specifically the sections where he discusses AI leadership's prior dismissal of his doubts and their subsequent walk-backs of their own claims.

Would be interested in your take on that.

xiphias2 2 days ago | parent [-]

Reasoning LLMs getting better at ARC-AGI proves that they are able to solve symbolic tasks without task-specific search on the CPU (which is the brute-force method).

It's never "pure scaling" (just running the same algorithm on more hardware), but there's continuous improvement in how to be even more algorithmically efficient (and algorithmic scaling is faster than hardware scaling).
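To make "specific search on the CPU" concrete, here is a minimal sketch of brute-force program search over ARC-style grids. The primitives, the tiny DSL, and the search depth are illustrative assumptions of mine, not any actual competition entry; real brute-force solvers compose far larger primitive sets with aggressive pruning.

```python
# Minimal sketch (toy primitives, tiny grids) of brute-force program
# search for ARC-style tasks: enumerate short compositions of grid
# transforms and keep the first one that reproduces every training pair.
from itertools import product

Grid = tuple[tuple[int, ...], ...]

# Hypothetical primitive transforms; a real DSL would have dozens.
def identity(g: Grid) -> Grid:
    return g

def flip_horizontal(g: Grid) -> Grid:
    return tuple(row[::-1] for row in g)

def rotate_90(g: Grid) -> Grid:
    return tuple(zip(*g[::-1]))

PRIMITIVES = [identity, flip_horizontal, rotate_90]

def search(train_pairs, max_depth=3):
    """Try every composition of primitives up to max_depth."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            def run(g, prog=program):
                for step in prog:
                    g = step(g)
                return g
            # Keep the first program consistent with all training pairs.
            if all(run(inp) == out for inp, out in train_pairs):
                return program
    return None

# Usage: a task whose hidden rule is "rotate 180 degrees".
train = [(((1, 2), (3, 4)), ((4, 3), (2, 1)))]
solution = search(train)
print([f.__name__ for f in solution])  # e.g. ['rotate_90', 'rotate_90']
```

The point of the contrast: this search is exhaustive and task-specific, whereas a reasoning LLM is asked to produce the transformation without enumerating the program space.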

unclebucknasty a day ago | parent [-]

>Reasoning LLMs getting better at ARC-AGI proves...

Even if true, it wouldn't be dispositive WRT my question, but...

1. Strictly speaking, LLMs themselves aren't capable of reasoning, by definition. Without external techniques, they are only capable of simulating reasoning, and thereby exhibiting reasoning-like behavior.

2. It's known that some (perhaps most) of the progress on the test has been the result of tuning specifically for the test ("cheating") rather than any emergent AGI. [0]

>It's never "pure scaling"

Oh, but it was. There has absolutely been a focus on pure scaling as the path to significant progress, and some prominent proponents have had to walk back their expectations and claims.

I think there's a little bit of revisionism going on, as they want past claims to be quickly forgotten. The interesting part is that the scaling mantra is starting anew with the new reasoning techniques.

[0] https://www.lesswrong.com/posts/KHCyituifsHFbZoAC/arc-agi-is...

FergusArgyll 2 days ago | parent | prev [-]

I figured it out without having to click the link....

I do agree with the commenter here that it's good to hear from people who have wildly different views. He is more annoying than "The market is gonna crash tomorrow" guy, though.