| ▲ | morsecodist 5 days ago |
| > I do feel that there is a routine bias on HN to underplay AI
|
| It's always interesting to see this take, because my perception is the exact opposite. For me personally, I don't think there's ever been an issue with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities. |
|
| ▲ | pmg101 4 days ago | parent | next [-] |
| It's a Rorschach test, isn't it? Because the technology itself is so young and so nebulous, everyone can unfalsifiably project their own hopes or fears onto it. |
|
| ▲ | atleastoptimal 5 days ago | parent | prev | next [-] |
| With any big AI release, some of the top comments usually claim that the tech itself is bad, relay a specific anecdote about an AI model messing up or a study where AI underperforms, or claim that AI is a huge bubble that will inevitably crash. The most emphatic denials of AI's utility I've seen here go much farther than anywhere else, where criticism of AI tends to be mild skepticism. Among many people it's a matter of tribal warfare that AI=bad. |
| ▲ | ACCount37 4 days ago | parent [-] | | Coping mechanisms. AI is overhyped and useless and will never improve, because the alternative is terrifying. | | |
| ▲ | morsecodist 4 days ago | parent [-] | | I'm very skeptical of this psychoanalysis of people who disagree with you. Can't people just be wrong? People are wrong all the time without it being some sort of defense mechanism. I feel this line of thinking puts you in a headspace to write off anything that contradicts your beliefs. You could just as easily say the AI hype is a cope: the tech industry and investors need there to be a hot new technology, and their careers depend on it. There might be some truth to the coping in either direction, but I feel you should try to ignore that and engage with the content of what the person is actually saying, or we'll never make any progress. |
|
|
|
| ▲ | tim333 4 days ago | parent | prev [-] |
| I have the impression a lot depends on people's past reading and knowledge of what's going on. If you've read the likes of Kurzweil, Moravec, maybe Turing, you're probably going to treat AGI/ASI as inevitable. People who haven't just see these chatbots and the like and think they won't change things much. It's maybe a bit like the early days of covid, when the likes of Trump were saying it's nothing, it'll be over by the spring, while people who understood virology could see that something bigger was on the way. |
| ▲ | morsecodist 3 days ago | parent [-] | | These people's theories (except Turing's) are highly speculative predictions about the future. They could be right, but they're not analogous to the predictions we get out of epidemiology, where we have had a lot of examples to study. What they're doing isn't science, and it's far more reasonable to doubt them. | | |
| ▲ | tim333 3 days ago | parent [-] | | The Moravec stuff I'd call moderately speculative rather than highly. All he really said is that compute power had tended to double every so often, and that if that kept up we'd have human-brain-equivalent compute in cheap devices in the 2020s. That bit wasn't really a stretch and has largely proved true. The more unspoken speculative bit is that there would then be a large economic incentive for bright researchers and companies to put a lot of effort into sorting out the software side. I don't consider LLMs to do the job of general intelligence, but there are a lot of people trying to figure it out. Given that we have general intelligence and are the product of ~2GB of DNA, the design can't be impossibly complex, although it's likely a bit more than gradient descent. |
|
|