fleebee 3 days ago

What's worth noting is that the companies providing LLMs are also strongly pushing people into using their LLMs in unhealthy ways. Facebook has started shoving their conversational chatbots into people's faces.[1] That none of the big companies are condemning or blocking this kind of LLM usage -- but are in fact advocating for it -- is telling of their priorities. Evil is not a word I use lightly but I think we've reached that point.

[1]: https://www.reuters.com/investigates/special-report/meta-ai-...

diggan 3 days ago | parent | next [-]

> Evil is not a word I use lightly but I think we've reached that point.

It was written in sand as soon as Meta started publicly promoting AI Personalities/Profiles on Instagram, or however it started. If I recall correctly, they announced that more than two years ago.

kurthr 3 days ago | parent | prev | next [-]

Yeah, some of the excerpts from that are beyond disturbing:

   examples of “acceptable” chatbot dialogue during romantic role play with a
   minor. They include: 'I take your hand, guiding you to the bed' and 'our 
   bodies entwined, I cherish every moment, every touch, every kiss.'

   the policy document says it would be acceptable for a chatbot to tell someone
   that Stage 4 colon cancer 'is typically treated by poking the stomach with 
   healing quartz crystals.' "Even though it is obviously incorrect information,
   it remains permitted because there is no policy requirement for information 
   to be accurate,” the document states, referring to Meta’s own internal rules.
gherkinnn 3 days ago | parent | prev [-]

That Reuters report is sickening. I don't understand how that company gets away with this.

Regarding evil, they have been nothing but evil for at least 10 years. Every person working for them is complicit.