atleastoptimal 5 days ago

This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.

AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs; if humans aren't needed to sustain productivity, they have no leverage against things becoming significantly worse for them, gradually or all at once).

morsecodist 5 days ago | parent | next [-]

> I do feel that there is a routine bias on HN to underplay AI

It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.

pmg101 4 days ago | parent | next [-]

It's a Rorschach test, isn't it?

Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.

atleastoptimal 5 days ago | parent | prev | next [-]

With any big AI release, some of the top comments are usually claiming either that the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. I've seen the most emphatic denials of the utility of AI here, going much farther than anywhere else, where criticism of AI is usually mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.

ACCount37 4 days ago | parent [-]

Coping mechanisms. AI is overhyped and useless and will never improve, because the alternative is terrifying.

morsecodist 4 days ago | parent [-]

I'm very skeptical of this psychoanalysis of people who disagree with you. Can't people just be wrong? People are wrong all the time without it being some sort of defense mechanism. I feel this line of thinking puts you in a headspace to write off anything contradictory to your beliefs.

You could easily say that the AI hype is a cope as well. The tech industry and investors need there to be a hot new technology; their careers depend on it. There might be some truth to the coping in either direction, but I feel you should try to ignore that and engage with the content of whatever the person is saying, or we'll never make any progress.

tim333 4 days ago | parent | prev [-]

I have the impression a lot depends on people's past reading and knowledge of what's going on. If you've read the likes of Kurzweil, Moravec, maybe Turing, you're probably going to treat AGI/ASI as inevitable. People who haven't just see these chatbots and the like and think they won't change things much.

It's maybe a bit like the early days of covid, when the likes of Trump were saying it's nothing, it'll be over by spring, while people who understood virology could see that a bigger thing was on the way.

morsecodist 3 days ago | parent [-]

These people's theories (except Turing's) are highly speculative predictions about the future. They could be right, but they are not analogous to the predictions we get out of epidemiology, where we have had a lot of examples to study. What they are doing is not science, and it is way more reasonable to doubt them.

tim333 3 days ago | parent [-]

The Moravec stuff I'd say is moderately speculative rather than highly. All he really said is that compute power had tended to double every so often, and if that kept up we'd have human-brain-equivalent compute in cheap devices in the 2020s. That bit wasn't really a stretch and has largely proved true.
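
For concreteness, here's a back-of-the-envelope version of that extrapolation; the figures below are approximations of Moravec's published estimates, not exact quotes:

    import math

    # Rough sketch of Moravec's extrapolation; all constants are approximate.
    brain_ops = 1e14      # his estimate of brain-equivalent compute (~100 million MIPS)
    pc_ops_1997 = 1e9     # order of magnitude for a ~$1000 machine in 1997
    doubling_years = 1.5  # assumed compute-per-dollar doubling time

    doublings = math.log2(brain_ops / pc_ops_1997)  # ~16.6 doublings needed
    print(f"brain-equivalent cheap hardware around {1997 + doublings * doubling_years:.0f}")
    # -> around 2022, i.e. "in the 2020s"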

The more speculative, mostly unspoken bit is that there would then be a large economic incentive for bright researchers and companies to put a lot of effort into sorting out the software side. I don't consider LLMs to do the job of general intelligence, but there are a lot of people trying to figure it out.

Given that we have general intelligence and are the product of ~2GB of DNA, the design can't be impossibly complex, although it's likely a bit more than gradient descent.
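
And ~2GB is at least the right order of magnitude; a quick sanity check, assuming the usual 2 bits per base pair:

    # Order-of-magnitude check on the genome-as-design-document size.
    base_pairs = 3.1e9             # haploid human genome, ~3.1 billion base pairs
    gb = base_pairs * 2 / 8 / 1e9  # 2 bits per base, 8 bits per byte
    print(f"haploid ~{gb:.2f} GB, diploid ~{2 * gb:.2f} GB")
    # -> haploid ~0.78 GB, diploid ~1.55 GB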

AIPedant 4 days ago | parent | prev | next [-]

> it's people not wanting to lose control or relative status in the world.

It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis, and there is an enormous pile of evidence supporting it.

I do not necessarily believe humans are smarter than orcas; it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery.

This is the basic problem with AI, and it has always had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim, AI folks will just sneer "are you even dan in Go?"

Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams. That said, it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,000 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."

thrw045 5 days ago | parent | prev | next [-]

I think AI is still in the weird twilight zone it was in when it first came out, in that it's great sometimes and also terrible. I still get hallucinations when I check a response from ChatGPT against Google.

On the one hand, what it says can't be trusted; on the other, I have debugged code I wrote where I was unable to find the bug myself, and ChatGPT found it.

I also think one reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands, if not millions, of people are getting responses that contain hallucinations the user doesn't notice. I fell into this trap myself after ChatGPT first came out: I became addicted to asking it anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.

But as I said before, there are still use cases for AI and that's what makes judging it so difficult.

wavemode 4 days ago | parent | prev | next [-]

I certainly understand why lots of people seem to believe LLMs are progressing toward becoming AGI. What I don't understand is the constant need to absurdly psychoanalyze the people who happen to disagree.

No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)

You don't get to just assert things without proof (that LLMs are going to become AGI) and then declare that anyone who is skeptical in the face of that missing proof must have something wrong with them.

iphone_elegance 4 days ago | parent | prev | next [-]

lmao, "underplay AI"? That's all this site has been about for the last few years.
