lotyrin | 5 days ago
The projection and optimism people are willing to engage in is incredible. The fallout on Reddit in the wake of the push to adopt GPT-5, for instance (how the vibe isn't as nice, and how that makes it harder to use as a therapist or girlfriend or whatever), is remarkable. And from what I've heard of internal sentiment at OpenAI, where there are concerns about these usage patterns, that was a VERY intentional effect.

Many people trust the quality of the output far too much, and it seems addictive (some kind of dopamine hit from deferring the need to think for yourself, or something). So much so that if I suggest things in my professional context, like not wholesale putting it in charge of communications with customers without evaluations, audits, or humans in the loop, it's as if I told them they can't go for their smoke break and their baby is ugly.

And that's not even getting into things like "awakened" AI or the AI "enlightenment" cults that are forming.
leodiceaa | 5 days ago | parent
> use it as a therapist or girlfriend or whatever

> it seems addictive to people (some kind of dopamine hit from deferring the need to think for yourself or something)

I think this whole thing has more to do with validation. Rigorous reasoning is hard. People found a validation machine, and it released them from the need to be rigorous. These people are not "having therapy" or "developing relationships"; they are fascinated by a validation engine. Hence the repositories full of woo-woo physics, and why so many people want to believe there's something more there.

The usage of LLMs at work, in government, policing, coding, etc. is so concerning because of exactly that: they will validate whatever poor reasoning people throw at them.