▲ | moralestapia 2 days ago
>Hinton's citations is 5x what Russell has. Ugh, Scientism at its best (worst?). Do you also back up Watson's statements about race? I'm sure you don't, as that's not part of your training. Accomplished researchers can say dumb things too, it happens all the time.
▲ | aswegs8 2 days ago
Experts are often worse than laymen at predicting macro developments because they are narrowed by their focus and biased towards it. Tetlock's Superforecasting is a great book on this. A condensed source I found on the topic: https://www.ing.com/Newsroom/News/The-more-famous-an-expert-...
▲ | danaris 2 days ago
And yet, if an accomplished researcher says something and has ample sourcing to back it up, it's worth paying attention to, even if only to be able to effectively refute it. Calling it "scientism" to care about these things as a way of dismissing the argument out of hand is anti-intellectualism at its worst.
▲ | theologic 2 days ago
I write this knowing that it is long, and probably written mainly for myself. However, I find that every once in a while, somebody PMs me and says, "wow, that was great," which strokes my ego and makes me want to post. I hope that somebody does read this, in light of the fact that short answers are often not satisfying. moralestapia's reply contains a series of fallacies and thinking errors.

1. Straw Man Fallacy and Misrepresentation

moralestapia distorted my statement that Hinton is a well-regarded researcher into the idea that I am suggesting any statement he makes about the AI ramp is correct. This is actually the opposite of what I wrote. Despite Hinton's groundbreaking work in AI, I said he is not good at understanding commercial ramps, as talented people often cannot conceive that others cannot follow their vision. moralestapia created a classic straw man argument: attacking a misrepresented version of the other's position.

2. Red Herring and Totally Irrelevant Reference

moralestapia brings in Watson's racist viewpoints that got him ostracized. Watson's controversial statements about race have no connection to the topic at hand or to the argument being made, and serve only to distract.

3a. Anchoring (Behavioral Economics) and Innuendo Effect

This next point comes from my fascination with Kahneman and Tversky; understanding their framework is incredibly important in how we relate. But more than that, I think they basically pull back the curtain and allow us to see what we do to hide the truth. The first bit of information we're exposed to disproportionately influences how we interpret subsequent information or people; that's anchoring. By introducing Watson's racist statements in response to my comments, he "anchored" the discussion with something emotionally and morally loaded. The phrase "Do you also back up Watson's statements about race? I'm sure you don't, as that's not part of your training." superficially acknowledges that I am most likely not defending racist statements. However, just raising the hypothetical ("do you also defend X?") puts the idea into the discussion. Research in psychology and behavioral economics confirms that once an accusation or association is mentioned, it becomes part of the mental framework, making "I know you are not" less effective at removing the implication than never introducing it in the first place. In other words, moralestapia does a great job of saying, "look, there seems to be somebody who could possibly support racist thought," by specifically saying, "I know you don't have this viewpoint."

3b. Guilt by Association and Ad Hominem (Implied)

This is slightly more subtle: if we unpack it, moralestapia is suggesting that if I did support Hinton's commercial ramp (which I don't), then somehow I must also accept all of another person's viewpoints. As referenced above, moralestapia introduces the notion that I may (although he says he doubts it) be a racist.

4. Hasty Generalization

moralestapia's blanket statement "Accomplished researchers can say dumb things too, it happens all the time" is true in a trivial sense, but it's being used to generalize and dismiss Hinton's credibility out of hand, regardless of the context or specifics. This is a generalization that bypasses engagement with the actual substance of Hinton's expertise or my original point.
I'm a bit less picky here because you could argue that this was exactly my point: although Hinton is clearly brilliant, he can't always appreciate that others may not be able to follow or implement his vision. But more specifically, moralestapia doesn't seem to understand or appreciate Hinton. I'll try to explain how to listen to a brilliant mind without dismissing him with statements like "can say dumb things too." I think moralestapia is cutting himself off from an enormous amount of learning by making such dismissals or by accusing others of "Scientism." In this case, I would also submit that moralestapia does not understand the historical context in which Hayek defined that term. As somebody who leans heavily toward the Austrian School of Economics and Hayek, it's funny to be accused with a label I would happily accept under Hayek's definition.

So, let's discuss Hinton. It's already been cited in this thread that Hinton has lost credibility because he predicted in 2016 that radiologists would be replaced. But I'd hope that most people can recognize that Hinton is very capable of making a technical judgment, just not necessarily a marketing or commercial one. That's the root of my comment: listen to him, but understand his background.

From a technical perspective, using a tensor-based approach and all the tools we have today should, in theory, allow us to replace radiologists with AI. There's nothing technically preventing this if a business case could be made for the investment. From this standpoint, Hinton was, and is, 100% correct. (A rough sketch of what I mean by a tensor-based pipeline appears at the end of this comment.)

So, why was he so off? While it's not a rigorous argument, I'll just mention that 1 out of every 5 dollars of our GDP is spent on medicine, basically double any other country, and that number is growing faster than inflation or GDP growth. I'll propose, without full argument, that a quick review shows the field is flooded with rent-seeking, as defined by Krueger & Tullock. As an investor in the medical field, you quickly realize that breaking into medicine is a minefield of regulation, bureaucracy, and entrenched interests. Navigating the labyrinth of government approvals and encountering medical practitioners who often seem intent on protecting their own positions can open your eyes to what rent-seeking really looks like. Why do we pay U.S. doctors 200–300% more than Scandinavian doctors? This is not a rigorous argument, but it should highlight that something looks wrong from 50,000 ft. Penetration into this field is not a technical problem; it's a political one.

Hinton is a talented researcher who finished his career at a massive company called Google, shielded from market realities. That doesn't mean we should dismiss his ability to perceive the capabilities of the technology. However, to get a valid time frame, we need to filter his predictions through that context.

Finally, do we say, "Well, I'll just listen to Stuart Russell instead"? While I think you should listen to him, you need to recognize that his impact on the technical community is much smaller. More than that, the specific development that has redefined AI, what I would call a tensor-based approach, is NOT where he made his mark in AI. Hinton, on the other hand, is all about applying tensors, which is why he is so widely cited and recognized.
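To make the "tensor-based approach" line concrete, here is a minimal, purely illustrative sketch in PyTorch of the core of an image-classification pipeline of the kind people point to for tasks like reading chest X-rays. To be clear, this is not a radiology model and not anything Hinton published; the tiny architecture, the input size, and the labels are all assumptions I'm making for illustration. The point is only that the technical machinery itself is ordinary and well understood.

    # Hypothetical toy example: image tensor in, finding probabilities out.
    import torch
    import torch.nn as nn

    class TinyXrayClassifier(nn.Module):
        def __init__(self, num_findings: int = 2):  # assumed labels: ["normal", "abnormal"]
            super().__init__()
            # Two small conv blocks stand in for a real backbone (ResNet, ViT, etc.).
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_findings)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

    # One fake grayscale 224x224 "scan" -> probabilities over the assumed findings.
    model = TinyXrayClassifier()
    logits = model(torch.randn(1, 1, 224, 224))
    print(torch.softmax(logits, dim=-1))

Whether something like this ever reads a real scan in a real hospital has almost nothing to do with the tensors and everything to do with data access, regulatory approval, liability, and who gets paid, which is the political problem described above.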