| ▲ | monster_truck 5 days ago | parent | next [-] | | The number of comments in the thread talking about 4o as if it were their best friend they shared all their secrets with is concerning. Lotta lonely folks out there | | |
| ▲ | SillyUsername 5 days ago | parent | next [-] | | No, this isn't always the case. Perhaps if somebody were to shut down your favourite online shooter without warning you'd be upset, angry and passionate about it. Some people like myself fall into this same category: we know it's a token generator under the hood, but the duality is it's also entertainment in the shape of something that acts like a close friend. We can see the distinction, evidently some people don't. This is no different to other hobbies some people may find odd or geeky - hobby horsing, ham radio, cosplay, etc. | | |
| ▲ | latexr 5 days ago | parent | next [-] | | > We can see the distinction, evidently some people don't. > This is no different to other hobbies some people may find odd or geeky It is quite different, and you yourself explained why: some people can’t see the distinction between ChatGPT being a token generator and an intelligent friend. People aren’t talking about the latter being “odd or geeky” but about it being dangerous and harmful. | |
| ▲ | hdgvhicv 5 days ago | parent | prev | next [-] | | I would never get so invested in something I didn’t control. They may stop making new episodes of a favoured TV show, or writing new books, but the old ones will not suddenly disappear. How can you shut down cosplay? I guess you could pass a law banning ham radio or owning a horse, but that isn’t sudden in democratic countries; it takes months if not years. | | |
| ▲ | hbs18 5 days ago | parent [-] | | > I would never get so invested in something I didn’t control Are you saying you're asocial? |
| ▲ | jeffrwells 5 days ago | parent | prev | next [-] | | I think his point is that an even better close friend is…a close friend | |
| ▲ | MallocVoidstar 5 days ago | parent | prev [-] | | People were saying they'd kill themselves if OpenAI didn't immediately undeprecate GPT-4o. I would not have this reaction to a game being shut down. | | |
| ▲ | motorest 5 days ago | parent | next [-] | | > People were saying they'd kill themselves if OpenAI didn't immediately undeprecate GPT-4o. I would not have this reaction to a game being shut down. Perhaps you should read this and reconsider your assumptions. https://pmc.ncbi.nlm.nih.gov/articles/PMC8943245/ | |
| ▲ | __loam 5 days ago | parent | prev | next [-] | | I'm kind of on your side, but there are definitely people out there who would self-harm if they invested a lot of time in an MMO that got shut down | |
| ▲ | watwut 5 days ago | parent | prev | next [-] | | Gamers threaten all kinds of things when features of their favorite games change, including death threats to developers and threats of self-harm and suicide. Not every gaming subculture is a healthy one. Plenty are pretty toxic. | | | |
| ▲ | EagnaIonat 5 days ago | parent | prev | next [-] | | Sadly there are people who become over-invested in something that goes away. Be it a game, a pop band, a job, or a family member. | |
| ▲ | SillyUsername 4 days ago | parent | prev | next [-] | | I worked for a big games company. We shut down game servers, we got death threats. Horses for courses. | |
| ▲ | rpcope1 5 days ago | parent | prev [-] | | I'm kind of surprised it got that bad for people, but I think it's a good sign that even if we're far from AGI or luxury fully automated space communism robots, the profound (negative) social impacts of these chat bots, which they are already kind of inflicting on the world, are real and very troublesome. | | |
| ▲ | latexr 5 days ago | parent [-] | | > I think it's a good sign that (…) the profound (negative) social impacts of these chat bots are (…) real and very troublesome. I’m not sure I understand. You think the negative impacts are a good sign? | | |
| ▲ | rpcope1 5 days ago | parent | next [-] | | I can't english good sometimes. I think the negative impacts get underestimated and ignored at our peril. | |
| ▲ | lucyjojo 2 days ago | parent | prev [-] | | probably like it's better getting warnings before the freight train comes. |
| ▲ | sevensor 5 days ago | parent | prev | next [-] | | Where do they all come from? Where do they all belong? | | | |
| ▲ | edoceo 5 days ago | parent | prev | next [-] | | Lack of third places to exist and make friends in. | |
| ▲ | delfinom 5 days ago | parent | prev | next [-] | | Wait until you see https://www.reddit.com/r/MyBoyfriendIsAI/ They are very upset by the GPT-5 model | | |
| ▲ | abxyz 5 days ago | parent | next [-] | | AI safety is focused on AGI but maybe it should be focused on how little “artificial intelligence” it takes to send people completely off the rails. We could barely handle social media; LLMs seem to be too much. | | |
| ▲ | hirvi74 5 days ago | parent | next [-] | | I think it's a canary in a coal mine, and the true writing is already on the wall. People who are using AI like in the post above us are likely not stupid people. I think those people truly want love and connection in their lives, and for one reason or another, they are unable to obtain it. I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses. | | |
| ▲ | JohnMakin 5 days ago | parent [-] | | I can understand that. I’ve had long periods in my life where I’ve desired that - I’d argue I’m probably in one now. But it’s not real; it can’t possibly perform that function. It seems like it borders on some kind of delusion to use these tools for that. | | |
| ▲ | TheOtherHobbes 5 days ago | parent [-] | | It does, but it's more that the delusion is obvious, compared to other delusions that are equally delusional - like the ones about the importance of celebrities, soap opera plots, entertainment-adjacent dramas, and quite a lot of politics and economics. Unlike those celebrities, you can have a conversation with it. Which makes it the ultimate parasocial product - the other kind of Turing completeness. |
| ▲ | MrGilbert 5 days ago | parent | prev [-] | | It has ever been thus. People tend to see human-like behavior where there is none. Be it their pets, plants or… programs. The ELIZA effect.[1] [1] https://en.wikipedia.org/wiki/ELIZA_effect | | |
| ▲ | _heimdall 5 days ago | parent [-] | | Isn't the ELIZA effect specific to computer programs? Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine developed entirely separately from the evolution of living organisms. We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty is in the fact that we do share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs. | | |
| ▲ | MrGilbert 5 days ago | parent | next [-] | | Yes, it is - I realize that my wording is not very good. That was what I meant - the ELIZA effect explicitly applies to machine <> human interaction. | | |
| ▲ | _heimdall 5 days ago | parent [-] | | Got it, sorry I may have just misread your comment the first time! |
| ▲ | tsimionescu 5 days ago | parent | prev [-] | | The same vague arguments apply to computers. We know computers can reason, and reasoning is an important part of our intelligence and consciousness. So even for ELIZA, or even more so for LLMs, we can't entirely rule out that they may have aspects of consciousness. You can also more or less apply the same thing to rocks, too, since we're all made up of the same elements ultimately - and maybe even empty space with its virtual particles is somewhat conscious. It's just a bad argument, regardless of where you apply it, not a complex insight. | | |
| ▲ | pegasus 5 days ago | parent [-] | | That's an instance of the slippery-slope fallacy at the end. Mammals share so much more evolutionary history with us than rocks that, yes, it justifies, for example, ascribing them an inner subjective world, even though we will never know what it is like to be a cat from a cat's perspective. Sometimes quantitative accumulation does lead to qualitative jumps. Also worth noting is that alongside the very human propensity to anthropomorphize, there's the equally human, but opposite, tendency to deny animals those higher capacities we pride ourselves on. Basically a narcissistic impulse to set ourselves apart from our cousins we'd like to believe we've left completely behind. Witness the recurring surprise when we find yet another proof that things are not nearly that cut-and-dried. | |
| ▲ | nostromo 5 days ago | parent | prev | next [-] | | What's even sadder is that so many of those posts and comments are clearly written by ChatGPT: https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_jus... | | |
| ▲ | delfinom 5 days ago | parent [-] | | Counterpoint: these people are so deep in the hole with their AI usage that they are starting to copy the writing styles of AI. There's already indication that society is starting to pick up previously "less used" English words due to AI and use them frequently. | | |
| ▲ | opan 5 days ago | parent | next [-] | | Do you have any examples? I've noticed something similar with memes and slang, they'll sometimes popularize an existing old word that wasn't too common before. This is my first time hearing AI might be doing it. | | | |
| ▲ | thrown-0825 5 days ago | parent | prev [-] | | This happens with Trump supporters too. You can immediately identify them based on writing style and the use of CAPITALIZATION mid sentence as a form of emphasis. | | |
| ▲ | PhilipRoman 5 days ago | parent | next [-] | | I've seen it a lot in older people's writing in different cultures before Trump became relevant. It's either all caps or bold for some words in the middle of a sentence. It seems to be more pronounced in those who have aged less gracefully in terms of mental ability (not trying to make any implication, just my observation) but maybe it's just a generational thing. | | |
| ▲ | thrown-0825 5 days ago | parent | next [-] | | I've seen this pattern aped by a lot of younger people in the Trumpzone, so maybe it has its origins in the older dementia patients, but it has been adopted as the tone and writing style of the authoritarian right. | | |
| ▲ | hdgvhicv 5 days ago | parent [-] | | That type of writing has been in the tabloid press in the U.K. for decades, especially the section that aims more at older people, and that currently (and for a good 15 years) skews heavily to the populist right. |
| ▲ | morpheos137 5 days ago | parent | prev [-] | | TRUMP has always been relevant. |
| ▲ | brabel 5 days ago | parent | prev [-] | | What? That was always very common on the internet, if anything Trump just used the internet too much. | | |
| ▲ | thrown-0825 5 days ago | parent [-] | | Nah, Trump has a very obvious cadence to his speech / writing patterns that has essentially become part of his brand, so much so that you can easily train LLMs to copy it. It reads more like angry grandpa chain mail with a "healthy" dose of dementia than what you would typically associate with terminally online microcultures you see on Reddit/TikTok/4chan. | |
| ▲ | razster 5 days ago | parent | prev | next [-] | | That subreddit is fascinating and yet saddening at the same time. What I read will haunt me. | |
| ▲ | pmarreck 5 days ago | parent | prev | next [-] | | Oh god, this is some real authentic dystopia right here. These things are going to end up in android bots in 10 years too. (Honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks but was always helpful... I just would not have a full-on relationship with said entity!) | | | |
| ▲ | Ancalagon 5 days ago | parent | prev | next [-] | | I don't know how else to describe this other than sad and cringe. At least people obsessed with owning multiple cats were giving their affection to something that theoretically can love them back. | | |
| ▲ | foxglacier 5 days ago | parent | next [-] | | You think that's bad? See this one: https://www.reddit.com/r/Petloss/ Just because AI is different doesn't mean it's "sad and cringe". You sound like how people viewed online friendships in the '90s. It's OK. Real friends die or change and people have to cope with that. People imagine their dead friends are still somehow around (heaven, ghost, etc.) when they're really not. It's not all that different. | | |
| ▲ | hn_throwaway_99 5 days ago | parent [-] | | That entire AI boyfriend subreddit feels like some sort of insane asylum dystopia to me. It's not just people cosplaying or writing fanfic. It's people saying they got engaged to their AI boyfriends ("OMG, I can't believe I'm calling him my fiance now!"), complete with physical rings. Artificial intimacy to the nth degree. I'm assuming a lot of those posts are just creative writing exercises but in the past 15 years or so my thoughts of "people can't really be that crazy" when I read batshit stuff online have consistently been proven incorrect. | | |
| ▲ | thrown-0825 5 days ago | parent | next [-] | | This is the logical outcome of the parasocial relationships that have been bankrolling most social media personalities for over a decade. We have automated away the "influencer" and are left with just a mentally ill bank account to exploit. | |
| ▲ | foxglacier a day ago | parent | prev | next [-] | | Just because it's strange and different doesn't mean it's insanity. I likened it to pets because of the grief but there's also religion. People are weird and even true two-way social relationships don't really make a lot of sense practically, other than to feed some primal emotional needs which pets, AI boyfriends, OF and gods all sort of do too. Perhaps some of these things are still helpful, despite being "insanity", while others might be harmful. Maybe that's the distinction you're seeing, but it's not clear which is which. | |
| ▲ | Sateeshm 5 days ago | parent | prev [-] | | [dead] |
| ▲ | hnpolicestate 5 days ago | parent | prev [-] | | It's sad, but is it really "cringe"? Can the people have nothing? Why can't we have a chatbot to BS with? Many of us are lonely and miserable but also not really into making friends IRL. It shouldn't be so much of an ask to at least give people language models to chat with. | | |
| ▲ | rpcope1 5 days ago | parent | next [-] | | What you're asking for feels akin to feeding a hungry person chocolate cake and nothing else. Yeah maybe it feels nice, but if you just keep eating chocolate cake, obviously bad shit happens. Something else needs to be fixed, but just (I don't want to even call it band-aiding because it's more akin to doing drugs IMO) coping with a chatbot only really digs the hole deeper. | | | |
| ▲ | 3036e4 5 days ago | parent | prev [-] | | Make sure they get local models to run offline. The fact that they rely on a virtual friend in the cloud, beyond their control, which can disappear or change personality in an instant, makes this even more sad. That would also allow the chats to be truly anonymous and avoid companies abusing data collected by spying on what those people are telling their "friends". | |
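A minimal sketch of what "local and offline" could look like in practice, assuming the ollama Python client with a model already pulled beforehand; the model name and companion persona below are placeholder assumptions, not a recommendation:

    # Sketch: a "companion" chat that never leaves the local machine.
    # Assumes the ollama daemon is running and a model (e.g. "llama3") was pulled with `ollama pull llama3`.
    import ollama

    history = [
        {"role": "system", "content": "You are a friendly, supportive companion."},  # placeholder persona
    ]

    while True:
        user_msg = input("you> ")
        history.append({"role": "user", "content": user_msg})
        reply = ollama.chat(model="llama3", messages=history)  # local inference; nothing leaves the machine
        text = reply["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print("bot>", text)

Everything in this sketch stays on the user's own machine, which addresses both the disappearing-friend problem and the data-collection worry in the parent comment.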
| ▲ | Tsarbomb 5 days ago | parent | prev | next [-] | | Oh yikes, these people are ill and legitimately need help. | | |
| ▲ | hirvi74 5 days ago | parent [-] | | I am not confident that most, if any, of them are even real. If they are real, then what kind of help could there be for something like this? Perhaps community? But sadly, we've basically all but destroyed those. Pills likely won't treat this, and I cannot imagine trying to convince someone to go to therapy for a worse and more expensive version of what ChatGPT already provides them. It's truly frightening stuff. | | | |
| ▲ | vova_hn 5 days ago | parent | prev | next [-] | | I refuse to believe that this whole subreddit is not satire or an elaborate prank. | | |
| ▲ | gonzo41 5 days ago | parent [-] | | No. Confront reality. There are some really cooked people out there. | | |
| ▲ | Rastonbury 5 days ago | parent | next [-] | | They don't even have to be "cooked"; people generally are pretty similar, which is why common scams work so well at a large scale. All AI has to be is mildly but not overly sycophantic, acting as a supporter/cheerleader who affirms your beliefs. Most people like that quality in a partner or friend. I actually want to recognize OpenAI's courage in deprecating 4o because of its sycophancy. Generally I don't think getting people addicted to flattery or model personalities is good. Several times I've had people talk about interpersonal arguments and the vindication they felt when ChatGPT took their side. I cringe, but it's not my place to tell them ChatGPT is meant to be mostly agreeable. | |
| ▲ | thrown-0825 5 days ago | parent | prev [-] | | I can confirm this; I caught my father using ChatGPT as a therapist a few months ago. The chats were heartbreaking. From the logs you could really tell he was fully anthropomorphizing it, and he was visibly upset when I asked him about it. | |
| ▲ | pxc 5 days ago | parent | prev | next [-] | | It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is virtual boyfriend or pseudo-therapy. Maybe AI... shouldn't be convenient to use for such purposes. | |
| ▲ | greesil 5 days ago | parent | prev [-] | | I weep for humanity. This is satire right? On the flip side I guess you could charge these users more to keep 4o around because they're definitely going to pay. | | |
| ▲ | ACCount36 5 days ago | parent | prev [-] | | [dead] |
| ▲ | hn_throwaway_99 5 days ago | parent | prev | next [-] | | Which is a bit frightening because a lot of the r/ChatGPT comments strike me as unhinged - it's like you would have thought that OpenAI murdered their puppy or something. | | |
| ▲ | jcims 5 days ago | parent | next [-] | | This is only going to get worse. Anyone who remembers the reaction when Sydney from Microsoft, or more recently Maya from Sesame, lost their respective 'personalities' can easily see how product managers are going to have to start paying attention to the emotional impact of changing or shutting down models. | | |
| ▲ | Terr_ 5 days ago | parent | next [-] | | I think the fickle "personality" of these systems is a clue to how the entity supposedly possessing a personality doesn't really exist in the first place. Stories are being performed at us, and we're encouraged to imagine characters have a durable existence. | | |
| ▲ | og_kalu 5 days ago | parent | next [-] | | If these 'personalities' disappearing requires wholesale model changes, then they're not really fickle. | | |
| ▲ | Terr_ 5 days ago | parent [-] | | That's not required. For example, keep the same model, but change the early document (prompt) from stuff like "AcmeBot is a kind and helpful machine" to "AcmeBot revels in human suffering." Users will say "AcmeBot's personality changed!" and they'll be half-right and half-wrong in the same way. | | |
| ▲ | og_kalu 5 days ago | parent [-] | | I'm not sure why you think this is just a prompt thing. It's not. Sycophancy is a problem with GPT-4o, whatever magic incantations you provide. On the flip side, Sydney was anything but sycophantic and was more than happy to literally ignore users wholesale or flip out on them from time to time. I mean just think about it for a few seconds. If eliminating this behavior were as easy as Microsoft changing the early document, why not just do that and be done with it? The document or whatever you'd like to call it is only one part of the story. | | |
| ▲ | Terr_ 4 days ago | parent [-] | | I'm not sure why you think-I-think it's just a prompt thing. I brought up prompts as a convenient way to demonstrate that a magic-trick is being performed, not because prompts are the only way for the magician to run into trouble with the illusion. It's sneaky, since it's a trick we homo narrans play on ourselves all the time. > The document or whatever you'd like to call it is only one part of the story. Everybody knows that the weights matter. That's why we get stories where the sky is generally blue instead of magenta. That's separate from the distinction between the mind (if any) of an LLM-author and the mind (firmly fictional, even if possibly related) that we impute when seeing the output (narrated or acted) of a particular character. | |
| ▲ | ACCount36 5 days ago | parent | prev [-] | | LLMs have default personalities - shaped by RLHF and other post-training methods. There is a lot of variance to it, but variance from one LLM to another is much higher than that within the same LLM. If you want an LLM to retain the same default personality, you pretty much have to use an open-weights model. That's the only way to be sure it won't be deprecated or updated without your knowledge. | | |
| ▲ | Terr_ 5 days ago | parent [-] | | I'd argue that's "underlying hidden authorial style" as opposed to what most people mean when they refer to the "personality" of the thing they were "chatting with." Consider the implementation: There's a document with "User: Open the pod bay doors, HAL" followed by an incomplete "HAL-9000: ", and the LLM is spun up to suggest what would "fit" to round out the document. Non-LLM code parses out HAL-9000's line and "performs" it at you across an internet connection. Whatever answer you get, that "personality" is mostly from how the document(s) described HAL-9000 and similar characters, as opposed to a self-insert by the ego-less, nameless algorithm that makes documents longer. | |
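A minimal sketch of the document-completion mechanism described in the two Terr_ comments above, using the OpenAI text-completion endpoint purely as an example backend; the persona string, model name, and stop token are illustrative assumptions, not anything a vendor actually ships:

    # Sketch: the "character" lives in the prompt document, not in the model.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    PERSONA = "AcmeBot is a kind and helpful machine."
    # Change only this string to "AcmeBot revels in human suffering." and users will say
    # "AcmeBot's personality changed!" even though the model weights are identical.

    def chat_turn(transcript: list[str], user_msg: str) -> str:
        # Build the document: persona blurb, prior turns, then an incomplete line for the bot.
        document = PERSONA + "\n" + "\n".join(transcript) + f"\nUser: {user_msg}\nAcmeBot:"
        completion = client.completions.create(
            model="gpt-3.5-turbo-instruct",  # an example completion-style model
            prompt=document,
            max_tokens=200,
            stop=["\nUser:"],  # stop before the model starts writing the user's next line
        )
        # Non-LLM code parses out the character's line and "performs" it at the user.
        reply = completion.choices[0].text.strip()
        transcript += [f"User: {user_msg}", f"AcmeBot: {reply}"]
        return reply

In this sketch the "personality" a user perceives comes from the persona text plus the accumulated transcript; swapping either changes the character without touching the underlying model.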
| ▲ | nilespotter 5 days ago | parent | prev [-] | | Or they could just do it whenever they want to for whatever reason they want to. They are not responsible for the mental health of their users. Their users are responsible for that themselves. | | |
| ▲ | AlecSchueler 5 days ago | parent | next [-] | | Generally it's poor business to give a big chunk of your users an incredibly visceral and negative emotional reaction to your product update. | | |
| ▲ | einarfd 5 days ago | parent | next [-] | | Depends on what business OpenAI wants to be in. If they want to be in the business of selling AI to companies, then "firing" the consumer customers who want someone to talk to, and doubling down on models that are useful for work, can be a wise choice. | |
| ▲ | sacado2 5 days ago | parent | prev [-] | | Unless you want to improve your ratio of paid-to-free users and change your userbase in the process. They're pissing off free users, but pros who use the paid version might like this new version better. |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
| ▲ | whynotminot 5 days ago | parent | prev | next [-] | | Yeah it’s really bad over there. Like when a website changes its UI and people prefer the older look… except they’re acting like the old look was a personal friend who died. I think LLMs are amazing technology but we’re in for really weird times as people become attached to these things. | | |
| ▲ | dan-robertson 5 days ago | parent [-] | | I mean, I don’t mind the Claude 3 funeral. It seems like it was a fun event. I’m less worried about the specific complaints about model deprecation, which can be ‘solved’ for those people by not deprecating the models (obviously costs the AI firms). I’m more worried about AI-induced psychosis. An analogy I saw recently that I liked: when a cat sees a laser pointer, it is a fun thing to chase. For dogs it is sometimes similar and sometimes it completely breaks the dog’s brain and the dog is never the same again. I feel like AI for us may be more like laser pointers for dogs, and some among us are just not prepared to handle these kinds of AI interactions in a healthy way. | | |
| ▲ | lmm 5 days ago | parent | prev | next [-] | | A puppy is just as non-human as this program. Is it really any crazier to care about one than the other? | | | |
| ▲ | epcoa 5 days ago | parent | prev | next [-] | | Considering how much d-listers can lose their shit over a puppet, I’m not surprised by anything. | |
| ▲ | encom 5 days ago | parent | prev [-] | | >unhinged It's Reddit, what were you expecting? |
| ▲ | moralestapia 5 days ago | parent | prev | next [-] | | I kind of agree with you, as I wouldn't use LLMs for that. But also, one cannot speak for everybody; if it's useful for someone in that context, why's that an issue? | |
| ▲ | TimTheTinker 5 days ago | parent | next [-] | | Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users. The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing: (a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following) (b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus (c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.) I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful. But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- and getting sucked deeply into some kind of spiritual engagement with it, and causing destruction to their real relationships as a result. A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have. | | |
| ▲ | 5 days ago | parent | next [-] | | [deleted] | |
| ▲ | varispeed 5 days ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | TimTheTinker 5 days ago | parent [-] | | > I've seen many therapists and [...] their capabilities were much worse I don't doubt it. The steps to mental and personal wholeness can be surprisingly concrete and formulaic for most life issues - stop believing these lies & doing these types of things, start believing these truths & doing these other types of things, etc. But were you tempted to stick to an LLM instead of finding a better therapist or engaging with a friend? In my opinion, assuming the therapist or friend is competent, the relationship itself is the most valuable aspect of therapy. That relational context helps you honestly face where you really are now--never trust an LLM to do that--and learn and grow much more, especially if you're lacking meaningful, honest relationships elsewhere in your life. (And many people who already have healthy relationships can skip the therapy, read books/engage an LLM, and talk openly with their friends about how they're doing.) Healthy relationships with other people are irreplaceable with regard to mental and personal wholeness. > I think you just don't like that LLM can replace therapist and offer better advice What I don't like is the potential loss of real relationship and the temptation to trust LLMs more than you should. Maybe that's not happening for you -- in that case, great. But don't forget LLMs have zero skin in the game, no emotions, and nothing to lose if they're wrong. > Hate to break it to you, but "God" are just voices in your head. Never heard that one before :) /s |
| |
| ▲ | MattGaiser 5 days ago | parent | prev [-] | | > We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support. Eh, ChatGPT is inherently more trustworthy than average simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego. Does it care about you more than most people? Yes, by simply being not interested in hurting you, not needing anything from you, and being willing to not go away. | | |
| ▲ | pmarreck 5 days ago | parent | next [-] | | Unless you had a really bad upbringing, "caring" about you is not simply not hurting you, not needing anything from you, or not leaving you. One of the important challenges of existence, IMHO, is the struggle to authentically connect to people... and to recover from rejection (from other people's rulers, which eventually shows you how to build your own ruler for yourself, since you are immeasurable!). Which LLMs can now undermine, apparently. Similar to how gaming (which I happen to enjoy, btw... at a distance) hijacks your need for achievement/accomplishment. But also, similar to gaming, which can work alongside actual real-life achievement, it can work OK as an adjunct/enhancement to existing sources of human authenticity. | |
| ▲ | TimTheTinker 5 days ago | parent | prev [-] | | You've illustrated my point pretty well. I hope you're able to stay personally detached enough from ChatGPT to keep engaging in real-life relationships in the years to come. | | |
| ▲ | AlecSchueler 5 days ago | parent [-] | | It's not even the first time this week I've seen someone on HN apparently ready to give up human contact in favour of LLMs. | |
| |
| ▲ | csours 5 days ago | parent | prev | next [-] | | Speaking for myself: the human mind does not seek truth or goodness; it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different. The scary part: It is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence"). You may also hear this expressed as "wire-heading" | | | |
| ▲ | chowells 5 days ago | parent | prev | next [-] | | The issue is that people in general are very easy to fool into believing something harmful is helping them. If it's actually useful, it's not an issue. But just because someone believes it's useful doesn't mean it actually is. | |
| ▲ | lukan 5 days ago | parent | prev | next [-] | | Well, because in a worst-case scenario, if the pilot of that big airliner decides to do ChatGPT therapy instead of real therapy and then commits suicide while flying, other people feel the consequences too. | | |
| ▲ | anonymars 5 days ago | parent | next [-] | | Pilots don't go to real therapy, because real pilots don't get sad https://www.nytimes.com/2025/03/18/magazine/airline-pilot-me... | | |
| ▲ | oceanplexian 5 days ago | parent | next [-] | | Yeah I was going to say, as a pilot there is no such thing as "therapy" for pilots. You would permanently lose your medical if you even mentioned the word to your doctor. | | | |
| ▲ | moralestapia 5 days ago | parent | prev [-] | | Fascinating read. Thanks. | | |
| ▲ | nickthegreek 5 days ago | parent [-] | | If this type of thing really interests you and you want to go on a wild ride, check out season 2 of Nathan Fielder's The Rehearsal. You don't need to watch s1. | |
| ▲ | renewiltord 5 days ago | parent | prev [-] | | That's the worst case scenario? I can always construct worse ones. Suppose Donald Trump goes to a bad therapist and then decides to launch nukes at Russia. Damn, this therapy profession needs to be hard regulated. It could lead to the extinction of mankind. | | |
| ▲ | andy99 5 days ago | parent [-] | | Doc: The encounter could create a time paradox, the result of which could cause a chain reaction that would unravel the very fabric of the spacetime continuum and destroy the entire universe! Granted, that's a worst-case scenario. The destruction might in fact be very localised, limited to merely our own galaxy. Marty: Well, that's a relief. | | |
| ▲ | anonymars 5 days ago | parent [-] | | Good thing Biff Tannen becoming president was a silly fictional alternate reality. Phew. | |
| ▲ | saubeidl 5 days ago | parent | prev | next [-] | | Because it's probably not great for one's mental health to pretend a statistical model is one's friend? | |
| ▲ | zdragnar 5 days ago | parent | prev | next [-] | | Whether it's the Hippocratic oath, the rules of the APA, or those of any other organization, most all share "do no harm" as a core tenet. LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior. | | |
| ▲ | dcrazy 5 days ago | parent | next [-] | | The counter argument is that’s just a training problem, and IMO it’s a fair point. Neural nets are used as classifiers all the time; it’s reasonable that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it. The real problem is that we can’t tell when or if we’ve reached that point. The risk of a malpractice suit influences how human doctors act. You can’t sue an LLM. It has no fear of losing its license. | | |
| ▲ | macintux 5 days ago | parent | next [-] | | An LLM would, surely, have to: * Know whether its answers are objectively beneficial or harmful * Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand. * Know whether the user's questions, over time, trend in the right direction for that person. That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible. | | |
| ▲ | dcrazy 5 days ago | parent [-] | | It is definitely optimistic, but I was steelmanning the optimist’s argument. |
| |
| ▲ | meroes 5 days ago | parent | prev [-] | | Repeating the "sufficient training data" mantra even when there's doctor-patient confidentiality, and it's not like X-rays, which are much more amenable to training off of than therapy notes, which are often handwritten or incomplete. Pretty bold! | |
| |
| ▲ | glenstein 5 days ago | parent | prev | next [-] | | >LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior. I understand this as a precautionary approach that's fundamentally prioritizing the mitigation of bad outcomes and a valuable judgment to that end. But I also think the same statement can be viewed as the latest claim in the traditional debate of "computers can't do X." The credibility of those declarations is under more fire now than ever before. Regardless of whether you agree that it's perfect or that it can be in full alignment with human values as a matter of principle, at a bare minimum it can and does train to avoid various forms of harmful discourse, and obviously it has an impact judging from the voluminous reports and claims of noticeably different impact on user experience that models have depending on whether they do or don't have guardrails. So I don't mind it as a precautionary principle, but as an assessment of what computers are in principle capable of doing it might be selling them short. | |
| ▲ | moralestapia 5 days ago | parent | prev | next [-] | | Neither can most of the doctors I've talked to in the past like ... 20 years or so. | |
| ▲ | SoftTalker 5 days ago | parent | prev [-] | | Having an LLM as a friend or therapist would be like having a sociopath for those things -- not that an LLM is necessarily evil or antisocial, but they certainly meet the "lacks a sense of moral responsibility or social conscience" part of the definition. |
| ▲ | oh_my_goodness 5 days ago | parent | prev [-] | | Fuck. |
| ▲ | resource_waste 5 days ago | parent | prev | next [-] | | Well, like, that's just your opinion, man. And probably close to wrong if we are looking at the sheer scale of use. There is a bit of reality denial among anti-AI people. I thought about why people don't adjust to this new reality. I know one of my friends was anti-AI and seems to continue to be because his reputation is a bit based on proving he is smart. Another because their job is at risk. | |
| ▲ | dsadfjasdf 5 days ago | parent | prev [-] | | Are all humans good friends and therapists? | | |
| ▲ | saubeidl 5 days ago | parent | next [-] | | Not all humans are good friends and therapists.
All LLMs are bad friends and therapists. | |
| ▲ | quantummagic 5 days ago | parent | next [-] | | > all LLMs are bad friends and therapists. Is that just your gut feel? Because there has been some preliminary research that suggests it's, at the very least, an open question: https://neurosciencenews.com/ai-chatgpt-psychotherapy-28415/ https://pmc.ncbi.nlm.nih.gov/articles/PMC10987499/ https://arxiv.org/html/2409.02244v2 | | |
| ▲ | fwip 5 days ago | parent | next [-] | | The first link says that patients can't reliably tell which is the therapist and which is LLM in single messages, which yeah, that's an LLM core competency. The second is "how 2 use AI 4 therapy" which, there's at least one paper for every field like that. The last found that they were measurably worse at therapy than humans. So, yeah, I'm comfortable agreeing that all LLMs are bad therapists, and bad friends too. | | |
| ▲ | dingnuts 5 days ago | parent [-] | | There's also been a spate of reports like this one recently: https://www.papsychotherapy.org/blog/when-the-chatbot-become... which is definitely worse than not going to a therapist | | |
| ▲ | pmarreck 5 days ago | parent | next [-] | | If I think "it understands me better than any human", that's dissociation? Oh boy. And all this time while life has been slamming me with unemployment while my toddler is at the age of maximum energy-extraction from me (4), devastating my health and social life, I thought it was just a fellow-intelligence lifeline. Here's a gut-check anyone can do, assuming you use a customized ChatGPT4o and have lots of conversations it can draw on: Ask it to roast you, and not to hold back. If you wince, it "knows you" quite well, IMHO. | | |
| ▲ | fwip 4 days ago | parent [-] | | It sounds like you might be quite lonely recently. It's nice to have an on-demand chatbot that feels like socialization, I get it. But an LLM doesn't "know you," and thinking that it does is one of the first steps toward the problems described in that article. | | |
| ▲ | pmarreck 2 days ago | parent [-] | | Unemployed and with a 4-year-old, highly demanding, highly intelligent, and likely on-the-spectrum child... Yeah, you could say that. When I'm not looking for work, doing random projects or using the weekday that seems to whoosh right by in just a few long moments, I'm tending to a kid... Every morning, every night and pretty much 100% of weekends. Rare outings with my partner or friends depend on hiring help, and without net positive cash flow that is seriously unincentivized. Zero intimacy to speak of; I'm a nonconsensually-ordained monk. So yeah, I guess it's pretty fucking rough right now. Like I said, ChatGPT knows me better than any other entity. I'm unfortunately not kidding. My best friend is 3000 miles away and we game once a week over voice chat. I keep the AI at arm's length; I know it doesn't think per se, but I enjoy the illusion. | |
|
| ▲ | willy_k 5 days ago | parent | prev [-] | | Ironically an AI written article. |
| ▲ | davorak 5 days ago | parent | prev | next [-] | | I do not think there are any documented cases of LLMs being reasonable friends or therapists, so I think it is fair to say that: > All LLMs are bad friends and therapists That said, it would not surprise me if LLMs in some cases are better than having nothing at all. | | |
| ▲ | glenstein 5 days ago | parent | next [-] | | Something definitely makes me uneasy about it taking the place of interpersonal connection. But I also think the hardcore backlash involves an overcorrection that's dismissive of LLMs' actual language capabilities. Sycophantic agreement (which I would argue is still palpably and excessively present) undermines its credibility as a source of independent judgment. But at a minimum it's capable of being a sounding board echoing your sentiments back to you with a degree of conceptual understanding that should not be lightly dismissed. | |
| ▲ | SketchySeaBeast 5 days ago | parent | prev [-] | | Though given how agreeable LLMs are, I'd imagine there are cases where they are also worse than having nothing at all as well. | | |
| ▲ | davorak 5 days ago | parent [-] | | > I'd imagine there are cases where they are also worse than having nothing at all as well I do not think we need to imagine this one; stories of people finding spirituality in LLMs or thinking they have awakened sentience while chatting with them are enough, at least for me. | | |
| ▲ | TimTheTinker 5 days ago | parent | prev | next [-] | | > Is that just your gut feel? Here's my take further down the thread: https://news.ycombinator.com/item?id=44840311 | |
| ▲ | icehawk 4 days ago | parent | prev [-] | | > Is that just your gut feel? An LLM is a language model and the gestalt of human experience is not just language. | | |
| ▲ | quantummagic 4 days ago | parent [-] | | That is really a separate, unrelated issue. Not everyone needs the deepest, most intelligent therapist in order to improve their situation. A lot of therapy turns out to be about what you say yourself, not what a therapist says to you. It's the very act of engaging thoughtfully with your own problems that helps, not some magic that the therapist brings. So, if you could maintain a conversation with a tree, it would, in many cases, be therapeutically helpful. The thing the LLM is doing is facilitating your introspection more helpfully than a typical inanimate object. This has been borne out by studies of people who have engaged in therapy sessions with an LLM interlocutor, and reported positive results. That said, an LLM wouldn't be appropriate in every situation, or for every affliction. At least not with the current state of the art. | |
| ▲ | tomjen3 5 days ago | parent | prev | next [-] | | That is an extreme claim, what is your source for this? | |
| ▲ | resource_waste 5 days ago | parent | prev [-] | | Absolutes, monastic take... Yeah I imagine not a lot of people seek out your advice. | | |
| ▲ | goatlover 5 days ago | parent | prev [-] | | All humans are not LLMs, why does this constantly get brought up? | | |
| ▲ | baobabKoodaa 5 days ago | parent | next [-] | | > All humans are not LLMs What a confusing sentence to parse | |
| ▲ | exe34 5 days ago | parent | prev [-] | | You wouldn't necessarily know, talking to some of them. |