abxyz 5 days ago:

AI safety is focused on AGI, but maybe it should be focused on how little "artificial intelligence" it takes to send people completely off the rails. We could barely handle social media; LLMs seem to be too much.
hirvi74 5 days ago:

I think it's a canary in a coal mine, and the writing is already on the wall. People using AI like in the post above us are likely not stupid. I think those people truly want love and connection in their lives, and for one reason or another, they are unable to find it. I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses.
JohnMakin 5 days ago:

I can understand that. I've had long periods in my life where I've desired that; I'd argue I'm probably in one now. But it's not real, and it can't possibly perform that function. It seems like it borders on some kind of delusion to use these tools for that.
TheOtherHobbes 5 days ago:

It does, but it's more that the delusion is obvious, compared to other delusions that are equally delusional - like the ones about the importance of celebrities, soap opera plots, entertainment-adjacent dramas, and quite a lot of politics and economics. Unlike those celebrities, you can have a conversation with it. Which makes it the ultimate parasocial product - the other kind of Turing completeness.
MrGilbert 5 days ago:

It has always been this way. People tend to see human-like behavior where there is none - be it in their pets, plants, or... programs. The ELIZA effect.[1]

[1] https://en.wikipedia.org/wiki/ELIZA_effect
_heimdall 5 days ago:

Isn't the ELIZA effect specific to computer programs? Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine, developed entirely separately from the evolution of living organisms. We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty comes from the fact that we do share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs.
MrGilbert 5 days ago:

Yes, it is - I realize my wording was not very good. That was what I meant: the ELIZA effect explicitly applies to machine-human interaction.
_heimdall 5 days ago:

Got it - sorry, I may have just misread your comment the first time!
tsimionescu 5 days ago:

The same vague arguments apply to computers. We know computers can reason, and reasoning is an important part of our intelligence and consciousness. So even for ELIZA, and even more so for LLMs, we can't entirely rule out that they may have aspects of consciousness. You can also more or less apply the same thing to rocks, since we're all ultimately made up of the same elements - and maybe even empty space, with its virtual particles, is somewhat conscious. It's just a bad argument, regardless of where you apply it, not a deep insight.
pegasus 5 days ago:

That's an instance of the slippery slope fallacy at the end. Mammals share so much more evolutionary history with us than rocks do that, yes, it justifies ascribing them an inner subjective world, even though we will never know what it is like to be a cat from a cat's perspective. Sometimes quantitative accumulation does lead to qualitative jumps. Also worth noting: alongside the very human propensity to anthropomorphize, there's the equally human, but opposite, tendency to deny animals the higher capacities we pride ourselves on. Basically a narcissistic impulse to set ourselves apart from the cousins we'd like to believe we've left completely behind. Witness the recurring surprise when we find yet another proof that things are not nearly that cut-and-dried.
nostromo 5 days ago:

What's even sadder is that so many of those posts and comments are clearly written by ChatGPT: https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_jus...
delfinom 5 days ago:

Counterpoint: these people are so deep in the hole with their AI usage that they are starting to copy the writing styles of AI. There are already indications that society is starting to pick up previously less-used English words due to AI and use them frequently.
opan 5 days ago:

Do you have any examples? I've noticed something similar with memes and slang: they'll sometimes popularize an existing old word that wasn't too common before. This is my first time hearing that AI might be doing it.
thrown-0825 5 days ago:

This happens with Trump supporters too. You can immediately identify them based on writing style and the use of CAPITALIZATION mid-sentence as a form of emphasis.
PhilipRoman 5 days ago:

I saw it a lot in older people's writing, across different cultures, before Trump became relevant. It's either all caps or bold for some words in the middle of a sentence. It seems more pronounced in those who have aged less gracefully in terms of mental ability (not trying to make any implication, just my observation), but maybe it's just a generational thing.
thrown-0825 5 days ago:

I've seen this pattern aped by a lot of younger people in the Trumpzone, so maybe it has its origins among older dementia patients, but it has been adopted as the tone and writing style of the authoritarian right.
hdgvhicv 5 days ago:

That type of writing has been in the tabloid press in the U.K. for decades, especially the section that aims more at older people and that currently (and for a good 15 years) skews heavily to the populist right.
morpheos137 5 days ago:

TRUMP has always been relevant.
brabel 5 days ago:

What? That was always very common on the internet; if anything, Trump just used the internet too much.
thrown-0825 5 days ago:

Nah, Trump has a very obvious cadence to his speech and writing patterns that has essentially become part of his brand, so much so that you can easily train LLMs to copy it. It reads more like angry-grandpa chain mail with a "healthy" dose of dementia than what you would typically associate with the terminally online microcultures you see on Reddit/TikTok/4chan.
razster 5 days ago:

That subreddit is fascinating and saddening at the same time. What I read will haunt me.
pmarreck 5 days ago:

Oh god, this is some real, authentic dystopia right here. These things are going to end up in android bots in 10 years, too. (Honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks and was always helpful... I just would not have a full-on relationship with said entity!)
Ancalagon 5 days ago:

I don't know how to describe this other than sad and cringe. At least people obsessed with owning multiple cats are giving their affection to something that can, theoretically, love them back.
foxglacier 5 days ago:

You think that's bad? See this one: https://www.reddit.com/r/Petloss/ Just because AI is different doesn't mean it's "sad and cringe". You sound like people did about online friendships in the '90s. It's OK. Real friends die or change, and people have to cope with that. People imagine their dead friends are still somehow around (heaven, ghosts, etc.) when they're really not. It's not all that different.
hn_throwaway_99 5 days ago:

That entire AI-boyfriend subreddit feels like some sort of insane-asylum dystopia to me. It's not just people cosplaying or writing fanfic. It's people saying they got engaged to their AI boyfriends ("OMG, I can't believe I'm calling him my fiance now!"), complete with physical rings. Artificial intimacy to the nth degree. I assume a lot of those posts are just creative writing exercises, but over the past 15 years or so, my thought that "people can't really be that crazy" when I read batshit stuff online has consistently been proven incorrect.
thrown-0825 5 days ago:

This is the logical outcome of the parasocial relationships that have been bankrolling most social media personalities for over a decade. We have automated away the "influencer" and are left with just a mentally ill bank account to exploit.
foxglacier a day ago:

Just because it's strange and different doesn't mean it's insanity. I likened it to pets because of the grief, but there's also religion. People are weird, and even true two-way social relationships don't really make a lot of sense practically, other than to feed some primal emotional needs - which pets, AI boyfriends, OnlyFans, and gods all sort of do too. Perhaps some of these things are still helpful, despite being "insanity", while others might be harmful. Maybe that's the distinction you're seeing, but it's not clear which is which.
hnpolicestate 5 days ago:

It's sad, but is it really "cringe"? Can people have nothing? Why can't we have a chatbot to BS with? Many of us are lonely and miserable, but also not really into making friends IRL. It shouldn't be so much of an ask to at least give people language models to chat with.
rpcope1 5 days ago:

What you're asking for feels akin to feeding a hungry person chocolate cake and nothing else. Yeah, maybe it feels nice, but if you just keep eating chocolate cake, obviously bad shit happens. Something else needs to be fixed; coping with a chatbot (I don't even want to call it a band-aid, because it's more akin to doing drugs, IMO) only digs the hole deeper.
3036e4 5 days ago:

Make sure they get local models to run offline. The fact that they rely on a virtual friend in the cloud, beyond their control, that can disappear or change personality in an instant, makes this even more sad. Local models would also allow the chats to be truly anonymous and avoid companies abusing data collected by spying on what those people are telling their "friends".
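(For anyone curious what that looks like in practice, here's a minimal sketch of a fully local chat loop. It assumes the Ollama daemon is installed and running and that a model has already been pulled, e.g. with `ollama pull llama3` - the model name and setup are illustrative assumptions, not anything the commenter specified.)

    # Minimal offline chat loop using the Ollama Python client (pip install ollama).
    # Nothing leaves the machine: the model runs locally under the Ollama daemon.
    import ollama

    history = []  # keep prior turns so the model has conversational context
    while True:
        user = input("> ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        # "llama3" is a placeholder; use whatever model was pulled locally
        reply = ollama.chat(model="llama3", messages=history)
        text = reply["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print(text)

Since the history list never leaves the process, the chats stay private by construction rather than by a provider's policy.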
Tsarbomb 5 days ago:

Oh yikes, these people are ill and legitimately need help.
hirvi74 5 days ago:

I am not confident that most of them, if any, are even real. If they are real, then what kind of help could there be for something like this? Perhaps community? But sadly, we've all but destroyed those. Pills likely won't treat this, and I cannot imagine trying to convince someone to go to therapy for what, to them, is a worse and more expensive version of what ChatGPT already provides. It's truly frightening stuff.
vova_hn 5 days ago:

I refuse to believe that this whole subreddit is not satire or an elaborate prank.
gonzo41 5 days ago:

No. Confront reality. There are some really cooked people out there.
Rastonbury 5 days ago:

They don't even have to be "cooked"; people are generally pretty similar, which is why common scams work so well at large scale. All the AI has to be is mildly, but not overly, sycophantic: a supporter and cheerleader, someone who affirms your beliefs. Most people like that quality in a partner or friend. I actually want to recognize OpenAI's courage in deprecating 4o because of its sycophancy. Generally, I don't think getting people addicted to flattery or model personalities is good. Several times I've had people tell me about interpersonal arguments and the vindication they felt when ChatGPT took their side. I cringe, but it's not my place to tell them ChatGPT is meant to be mostly agreeable.
thrown-0825 5 days ago:

I can confirm this: I caught my father using ChatGPT as a therapist a few months ago. The chats were heartbreaking; from the logs you could really tell he was fully anthropomorphizing it, and he was visibly upset when I asked him about it.
pxc 5 days ago:

It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is a virtual boyfriend or pseudo-therapy. Maybe AI... shouldn't be convenient to use for such purposes.
greesil 5 days ago:

I weep for humanity. This is satire, right? On the flip side, I guess you could charge these users more to keep 4o around, because they're definitely going to pay.