sacul 5 days ago

Yeah, thanks for asking. My reasoning is this:

That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends too - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are directly at odds with yours.

Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.

Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.

There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.

AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become one.

fragmede 5 days ago | parent [-]

But why not? If we look past the trappings of "we hate corporations", why not treat it as a friend? Let's say you acquire a free-trade organic GPU and run an ethically trained LLM on it. Why is an expensive funny-shaped rock not allowed to become a friend when a stuffed animal can?

aduwah 5 days ago | parent | next [-]

The stuffed animal has only sentimental value; it does not hold a statistics-based, geopolitically biased opinion that it shares with you and that influences your decisions. If you want to see how bad a chatbot can be as a friend, look at the recent case in which one drove a mentally vulnerable minor to suicide.

mu53 5 days ago | parent | prev [-]

There is a new term for this: "AI psychosis."

AI chatbots are not humans: they don't have ethics, they can't be held responsible, and they are the product of complex mathematics.

It really takes the bad parts of social media to the next level.