gobdovan 3 days ago

Hinton is too speculative and inconsistent for me. A reporter outside the AI field even called him out: he now says with confidence that only blue-collar work will survive AI, after saying a few years back, with the same confidence, that only creative work would survive.

I can't help but compare his takes with Stuart Russell's, which are well grounded, coherent, and clearly presented. I often revisit Stuart Russell's discussion with Steven Pinker on AI for the clarity he brings to the topic.

theologic 3 days ago | parent | next [-]

Hinton's citation count is 5x Russell's. There's a good reason he's won both the Turing Award and the Nobel Prize. He is just an incredible researcher. I would argue, though, that when you're an incredibly bright, talented person who understands problems many other people simply cannot follow, you're sometimes not the right person to be setting expectations for how fast a product will ramp into mainstream society.

Russell is much more measured in his statements and much more policy driven.

In my mind you need to listen to both and try to figure out where they're coming from.

moralestapia 2 days ago | parent [-]

>Hinton's citation count is 5x Russell's.

Ugh, Scientism at its best (worst?). Do you also back up Watson's statements about race? I'm sure you don't, as that's not part of your training.

Accomplished researchers can say dumb things too, it happens all the time.

aswegs8 2 days ago | parent | next [-]

Experts are often worse than laymen at predicting macro developments, because their narrow focus biases their view. Tetlock's Superforecasting is a great book on this.

A condensed source I found on the topic:

https://www.ing.com/Newsroom/News/The-more-famous-an-expert-...

theologic 2 days ago | parent [-]

I would like to confirm that I've been highly influenced by Tetlock, and I appreciate his work. It was an integral part of a strategy in a business group I was running.

With that said, if you can see beyond his vitriol, Nassim Taleb has some valid criticisms of Tetlock's methodologies. I love Taleb, but hate his tendency to try and shock people. However, he does raise valid concerns about fat tails.

danaris 2 days ago | parent | prev | next [-]

And yet, if an accomplished researcher says something and has ample sourcing to back it up, it's worth paying attention to, even if only to be able to effectively refute it.

Calling it "scientism" to care about these things as a way of dismissing the argument out of hand is anti-intellectualism at its worst.

theologic 2 days ago | parent | next [-]

Although I refer to this above, I do like your comment, and I would like to add the historical context for scientism. The term was popularized by Hayek, one of the fundamental pillars of Austrian economics, a school of which I am a massive fan and which has heavily influenced my thinking.

He was basically concerned that a group of rulers were trying to use science to dominate everybody else. I agree with Hayek and his concerns.

The problem is that the term has since morphed into something that can mean many things. While a bit of a rat hole, Wikipedia does have a great treatment of the term and its journey.

When you use a term like this, which by its very nature is pejorative, without regard for its definition, it is not only anti-intellectualism but also poor communication. It becomes communication set up to foster division rather than learning.

moralestapia 2 days ago | parent | prev [-]

>Hinton's citation count is 5x Russell's. There's a good reason he's won both the Turing Award and the Nobel Prize.

Those are not arguments, that's scientism.

I upvoted you anyway, as you're at least trying.

theologic 2 days ago | parent | prev [-]

I write this knowing that it is long, and probably written mainly for myself. However, I find that every once in a while, somebody PMs me and says, "wow, that was great," which strokes my ego and makes me want to post. I hope that somebody does read this—in light of the fact that short answers are often not satisfying.

moralestapia's reply contains a series of fallacies and thinking errors.

1. Straw Man Fallacy and Misrepresentation

moralestapia distorted my statement that Hinton is a well-regarded researcher into the idea that I am suggesting any statement he makes about the AI ramp is correct. This is actually the opposite of what I wrote. Despite Hinton's groundbreaking work in AI, I said he is not good at understanding commercial ramps, as talented people often cannot conceive that others cannot follow their vision.

moralestapia created a classic straw man argument: attacking a misrepresented version of the other’s position.

2. Red Herring and Totally Irrelevant Reference

moralestapia brings in Watson's racist viewpoints that got him ostracized.

Watson’s controversial statements about race have no connection to the topic at hand or to the argument being made and serve only to distract.

3a. Anchoring (Behavioral Economics) and Innuendo Effect

This next point comes from my fascination with Kahneman and Tversky—understanding their framework is incredibly important in how we relate. But more than that, I think they basically pull back the curtain and allow us to see what we do to hide the truth.

The first bit of information we’re exposed to disproportionately influences how we interpret subsequent information or people—that’s anchoring. By introducing Watson’s racist statements in response to my comments, he “anchored” the discussion with something emotionally and morally loaded.

The phrase “Do you also back up Watson's statements about race? I’m sure you don’t, as that’s not part of your training.” superficially acknowledges that I am most likely not defending racist statements. However, just raising the hypothetical (“do you also defend X?”) puts the idea into the discussion.

Research in psychology and behavioral economics confirms that once an accusation or association is mentioned, it becomes part of the mental framework—making “I know you are not” less effective at removing the implication than never introducing it in the first place.

In other words, moralestapia does a great job of saying, "look, there seems to be somebody who could possibly support racist thought," by specifically saying, "I know you don't have this viewpoint."

3b. Guilt by Association and Ad Hominem (Implied)

This is slightly more subtle: if we unpack it, moralestapia is suggesting that if I did support Hinton's commercial ramp (which I don't), then somehow I must also accept all of another person’s viewpoints. As referenced above, moralestapia introduces the notion that I may (although he says he doubts it) be a racist.

4. Hasty Generalization

moralestapia's blanket statement “Accomplished researchers can say dumb things too, it happens all the time” is true in a trivial sense, but it’s being used to generalize and dismiss Hinton’s credibility out of hand, regardless of the context or specifics. This is a generalization that bypasses engagement with the actual substance of Hinton’s expertise or my original point.

I’m a bit less picky here because you could argue that this was exactly my point: although Hinton is clearly brilliant, he can't always appreciate that others may not be able to follow or implement his vision.

But specifically, moralestapia doesn't seem to understand or appreciate Hinton. I'll try to explain how to listen to a brilliant mind and not dismiss him with statements like "can say dumb things too." I think moralestapia is cutting himself off from an enormous amount of learning by making such dismissals or by accusing others of "Scientism." In this case, I would also submit that moralestapia does not understand the historical context of how Hayek derived the term. As somebody who leans heavily toward the Austrian School of Economics and Hayek, it's funny to be accused of something that, by Hayek's own definition, I would say I agree with.

So, let’s discuss Hinton.

It’s already been cited in this thread that Hinton has lost credibility because he predicted in 2016 that radiologists would be replaced.

But I’d hope that most people can recognize that Hinton is very capable of making a technical judgment, just not necessarily a marketing or commercial one. That’s the root of my comment: listen to him, but understand his background.

From a technical perspective, using a tensor-based approach and all the tools we have today should, in theory, allow us to replace radiologists with AI. There’s nothing technically preventing this if a business case could be made for the investment. From this standpoint, Hinton was—and is—100% correct.
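
To make "tensor-based" concrete, here is a minimal, purely illustrative sketch (PyTorch; the class name, shapes, and two-class output are all made up, and this resembles nothing like a real clinical system):

    import torch
    import torch.nn as nn

    # Toy convolutional classifier over single-channel scans.
    # Every dimension and the class count are placeholder assumptions.
    class TinyScanClassifier(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):  # x: (batch, 1, H, W) tensor of grayscale scans
            return self.classifier(self.features(x).flatten(1))

    model = TinyScanClassifier()
    logits = model(torch.randn(4, 1, 224, 224))  # 4 fake scans
    print(logits.shape)  # torch.Size([4, 2])

The point is only that the technical machinery is straightforward and well understood; the deployment problem, as below, is not.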

So, why was he so off?

While it’s not a rigorous argument, I’ll just mention that 1 out of every 5 dollars of our GDP is spent on medicine—basically double any other country—and that number is growing faster than inflation or GDP growth.

I’ll propose, without full argument, that a quick review shows the field is flooded with rent-seeking, as defined by Krueger & Tullock.

As an investor in the medical field, you quickly realize that breaking into medicine is a minefield of regulation, bureaucracy, and entrenched interests.

Navigating the labyrinth of government approvals and encountering medical practitioners—who often seem intent on protecting their own positions—can open your eyes to what rent-seeking really looks like. Why do we pay U.S. doctors 200–300% more than Scandinavian doctors? This is not a rigorous argument, but it should highlight that something looks wrong from 50,000 ft.

Penetration into this field is not a technical problem; it’s a political one. Hinton is a talented researcher who finished his career at a massive company called Google, shielded from market realities. That doesn’t mean we should dismiss his ability to perceive the capabilities of the technology. However, to get a valid time frame, we need to process his output.

Finally, should we just say, "Well, I'll listen to Stuart Russell instead"? While I think you should listen to him, you need to recognize that his impact on the technical community is much smaller. More than that, the specifics of what has redefined AI, which I would call a tensor-based approach, are NOT where he made his mark in AI. Hinton, on the flip side, is all about applying tensors, which is why he is so widely cited and recognized.

chubot 3 days ago | parent | prev | next [-]

I guess it's worth reminding people that in 2016, Geoff Hinton said some pretty arrogant things that turned out to be totally wrong:

"Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that’s already over the edge of the cliff but hasn’t yet looked down.

It’s just completely obvious that within five years deep learning is going to do better than radiologists. ... It might be 10 years, but we’ve got plenty of radiologists already."

https://www.youtube.com/watch?v=2HMPRXstSvQ

This article has some good perspective:

https://newrepublic.com/article/187203/ai-radiology-geoffrey...

His words were consequential. The late 2010s were filled with articles that professed the end of radiology; I know at least a few people who chose alternative careers because of these predictions.

---

According to US News, radiology is the 7th best paying job in 2025, and the demand is rising:

https://money.usnews.com/careers/best-jobs/rankings/best-pay...

https://radiologybusiness.com/topics/healthcare-management/h...

I asked AI about radiologists in 2025, and it came up with this article:

https://medicushcs.com/resources/the-radiologist-shortage-ad...

The Radiologist Shortage: Rising Demand, Limited Supply, Strategic Response

(Ironically, this article feels spammy to me -- AI is probably being too credulous about what's written on the web!)

---

I read Cade Metz's book about Hinton and the tech transfer from universities to big tech ... I can respect him for persisting in his line of research for 20-30 years while others were saying he was barking up the wrong tree

But maybe this late life vindication led to a chip on his shoulder

The way he phrased this is remarkably confident and arrogant, and not like the behavior of a respected scientist (now with a Nobel Prize) ... It's almost like Twitter-speak that made its way into real life, and he's obviously not from the generation that grew up with Twitter

gobdovan 2 days ago | parent | next [-]

Yeah, I'd even forgotten about that... I suppose the same kind of confidence is what made him stick with neural nets for so long, despite mainstream AI thinking they were a dead end. But that's the thing in academia: bold claims get encouraged, since ideas still get you the credit even if they only prove useful decades later, and not in the way you imagined.

giardini 3 days ago | parent | prev | next [-]

I wouldn't be too hard on Hinton. Researchers in image processing, geophysics and medicine have been saying the same thing since at least the early 1980's. There was always something coming that was just over the next hill that would take the human out of the loop. That special something always evaporated with time. I suppose it did keep funding coming in.

eloisant 2 days ago | parent [-]

The bottom line is that predicting the future is hard. I'm always skeptical of people who claim they can.

Of course, because you have different people all predicting a different future, some of them are bound to get it right. That doesn't mean the same person will be right again.

dinfinity 2 days ago | parent | prev | next [-]

I think the key insight is that AI is (undoubtedly going to be) better at analysis and diagnosis than radiologists, but isn't yet widely deployed because:

1. The medical world doesn't accept new technologies easily. Humans get a much higher pass on bad performance than technology and especially than new technology. Things need to be extensively tested and certified, so adoption is slow.

2. AI is legally very different than a radiologist. The liability structure is completely different, which matters a lot in an environment that deals with life or death decisions.

3. Image analysis is not language analysis and generation. This specific kind of machine learning is not the part that has advanced enormously in the past two years. General knowledge of the world doesn't help that much when the task is to look at pixels and determine whether something is cancer or not. This can be improved by integrating the image analysis with all the other possibly relevant information (case history etc.) and diagnosing the case via that route (a rough sketch of what that fusion might look like is below).
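
A toy, assumption-laden sketch of point 3 (PyTorch; every dimension and feature name is invented): concatenate an image embedding from any image encoder with encoded case-history features, then classify from the combined vector.

    import torch
    import torch.nn as nn

    # Hypothetical fusion head: image embedding + tabular case-history
    # features -> diagnosis logits. Shapes are arbitrary placeholders.
    class FusionDiagnoser(nn.Module):
        def __init__(self, img_dim=512, history_dim=32, n_classes=2):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(img_dim + history_dim, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, img_embedding, history):
            return self.head(torch.cat([img_embedding, history], dim=-1))

    model = FusionDiagnoser()
    fake_img = torch.randn(8, 512)     # stand-in for an image encoder's output
    fake_history = torch.randn(8, 32)  # stand-in for encoded case history
    print(model(fake_img, fake_history).shape)  # torch.Size([8, 2])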

chubot 2 days ago | parent [-]

Well maybe, but none of that implies that fewer radiologists will be employed, or that people studying radiology now are fools.

The overwhelming likely thing is that radiologist jobs will change, just like programming jobs will change.

e.g. see my comment on: Did Google and Stack Overflow "replace" programmers?

https://news.ycombinator.com/item?id=43013363

That is, I do not think programmers will be "replaced". The job will just be different; people will come to rely on LLMs for their jobs, like they rely on search engines.

Likewise, you can probably hire fewer doctors now because Google appeared in ~2000, but nobody talked about them being "replaced". There is NOT less demand for doctors.

---

It also reminds me of the predictions around self-driving cars, which are 13+ years old at this point:

https://news.ycombinator.com/item?id=45149270

I believe Hacker News mostly fell for the hype in ~2012-2016. And even though the predictions turned out to be comically wrong, many people are still attached to them

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

i.e. I don't think Hinton will be proven "right" with ANY amount of time. The whole framing is just off.

It's not humans xor AI, it's humans + AI. And the world is not static

dinfinity a day ago | parent [-]

The world is indeed not static. That it hasn't happened yet doesn't mean it won't.

Predictions about self driving were off, but far from "comically wrong". Waymo's operations are proof of that.

And to conclude things based on the state of the replacement of programmers after only 2-3 years of ChatGPT being a thing is folly.

The reality is that AI has far fewer limitations and legacy cruft than humans to deal with. Don't get me wrong, I like humans, but our performance is very close to the peak of what it could ever be. That of AI not so much. Remember that AI has been evolving for less than 100 years and it is already where it is today. That took us/biology orders of magnitude more time.

The only real question is how fast it will replace (which) human labor.

meowface 2 days ago | parent | prev | next [-]

To be fair we haven't hit 2026 yet so his prediction might still turn out to be somewhat accurate. But yeah, probably not.

ionwake 2 days ago | parent | prev [-]

The guy invented AI; who cares if he is a couple of years off on a prediction that will come to fruition, jeez. I would call the geezer confident, not arrogant. Who is this? Some burned-out postdoc? No offence, but you are randomly laying into the guy as if this is Oprah

Freedom2 3 days ago | parent | prev | next [-]

> too speculative and inconsistent for me.

I wonder if he's a HN commenter as well, in that case.

I do appreciate your mention of Stuart Russell however. I've recently been watching a few of his talks and have found them very insightful.

treyfitty 3 days ago | parent | prev [-]

Eh, idk who Hinton is, but I’d cut him some slack for making both statements. I could imagine a case where “creatives” can semantically be understood as the “new blue collar.” Musicians, dancers, photographers… are not blue-collar manufacturing employees, but they are fiscally more similar to them than to their white-collar counterparts. It’s possible he used inconsistent terms because he really means “low-wage employees who are far away from the decisions that create the monetary benefit,” but that’s a mouthful

gobdovan 3 days ago | parent | next [-]

Hinton is the guy from the article. He is a big figure in AI research.

For context: he once argued AI could handle complex tasks but not drawing or music. Then when Stable Diffusion appeared, he flipped to "AI is creative." Now he's saying carpentry will be the last job to be automated, so people should learn that.

The pattern is one of sweeping, premature claims about what AI can or can't do that don't age well. His economic framing is similarly simplified, to the point of being either trivial or misleading.

SanjayMehta 2 days ago | parent [-]

Carpentry is already partially automated. I’ve worked on cutting algorithms to minimise waste. There are a number of startups which will go from a 3D interior design to manufacturing. Think of customised Ikea.
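
For a flavour of what that kind of optimisation can look like, here's a toy sketch only (a plain first-fit-decreasing heuristic for 1D stock; real cutting optimisation is 2D/3D nesting and far more involved than this):

    # Toy first-fit-decreasing cut planner for 1D stock (e.g. boards of a
    # fixed length). Assumes every piece fits on a single board.
    def plan_cuts(pieces, stock_length):
        boards = []  # remaining usable length on each opened board
        for piece in sorted(pieces, reverse=True):  # longest pieces first
            for i, remaining in enumerate(boards):
                if piece <= remaining:
                    boards[i] -= piece
                    break
            else:
                boards.append(stock_length - piece)  # open a new board
        return len(boards), sum(boards)  # boards used, total offcut waste

    print(plan_cuts([120, 80, 75, 60, 45, 30], stock_length=244))  # (2, 78)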

glitchc 3 days ago | parent | prev [-]

If you don't know who Geoffrey Hinton is, I suggest you make a trip to Wikipedia post haste. Our modern LLM renaissance wouldn't exist without him.

nurettin 3 days ago | parent [-]

Ehhh it sounds like he's a poster boy who rode on the success of others (LeCun, Deepmind) and says whatever the current popular opinion is until proven wrong and shows no hint of predictive capability.

glitchc 2 days ago | parent [-]

Say what? Show some respect, son!

Hinton co-authored the seminal paper on backpropagation. He also co-invented Boltzmann machines and mixture-of-experts models, and did foundational work on unsupervised learning. He championed neural networks for 20 years even though there was essentially zero funding for them through the 80s and 90s. He was Yann LeCun's postdoctoral adviser; LeCun didn't know ass from tea kettle until he came through Hinton's lab.

Know perchance a fellow by the name of Ilya Sutskever? ChatGPT ring any bells? Also a student of Hinton's. The list is very long.

pizzalife 2 days ago | parent | next [-]

“Show some respect?”

Do these historical accolades give him a blank check to be wrong in the present?

eloisant 2 days ago | parent [-]

re-read the comment he was responding to.

"sounds like he's a poster boy who rode on the success of others"

The person who wrote that didn't even bother checking who Hinton was before pulling that sentence out of their ass.

nurettin 2 days ago | parent | prev [-]

Frankly, this all sounds like hero worship and the language is very cringe.

I know the backprop paper. I read it back in the early 2000s, and I remember Hinton as a co-author. Same with Boltzmann machines: co-author. "Advisor to that great guy", "teacher of this great guy", "Nobel prize together with that guy" <- all of this leads me to the above conclusion. YMMV

polotics 2 days ago | parent | next [-]

just one example of the halo effect: having been instrumental in the development of an important technology doesn't magically make one an expert in the economic impact of that technology, since economics is a completely different field of study

glitchc 2 days ago | parent | prev | next [-]

I'm not a fanboy, far from it. I'm not affiliated with the lab or his work. I'm not even a big fan of machine learning. But Hinton's contributions to the field cannot be overstated. He single-handedly kept it alive for two decades amid a massive lack of funding. Anyone who has worked in research will attest that this is an incredible feat.

People on Hacker News seem to idolize the lone genius who somehow pulled himself up by his bootstraps. That person does not exist. The truth is that great minds are made, moulded into shape. That the best people behind our AI technology emerged from his lab is no coincidence. Those trash-talking Hinton on this forum are unlikely to achieve 1/100th of what he has accomplished.

“The housecat may mock the tiger,” said the master, “but doing so will not make his purr into a roar.” [1]

[1] http://www.catb.org/esr/writings/unix-koans/end-user.html

nurettin a day ago | parent [-]

Ah, someone who enjoys defending his tigers against cats. Well, enjoy your low-hanging fruit, tiger. Someone "so knowledgeable" and totally not a poster boy, making bold claims that don't come true at all, will continue not making sense to me. But don't get me wrong, I know this is an "old man shakes fist at cloud" kind of situation. Who cares what doesn't make sense to me anyway? My academic circles don't, and they do fine. I continue to contribute what little I can. But my field is more SR than NN.

sriram_malhar 2 days ago | parent | prev | next [-]

    Frankly, this all sounds like hero worship and the language is very cringe.

"Frankly, I just want to be a contrarian"
nurettin 2 days ago | parent [-]

"Hell hath no fury like a conformist scorned"

qwertytyyuu 2 days ago | parent | prev [-]

Come on, there is space for theatrics on Hacker News