▲ | fleebee 3 days ago |
What's worth noting is that the companies providing LLMs are also strongly pushing people into using their LLMs in unhealthy ways. Facebook has started shoving their conversational chatbots into people's faces.[1]

That none of the big companies are condemning or blocking this kind of LLM usage -- but are in fact advocating for it -- is telling of their priorities. Evil is not a word I use lightly, but I think we've reached that point.

[1]: https://www.reuters.com/investigates/special-report/meta-ai-...
▲ | diggan 3 days ago |
> Evil is not a word I use lightly but I think we've reached that point.

It was written in sand as soon as Meta started writing publicly about AI Personalities/Profiles on Instagram, or however it started. If I recall correctly, they announced it more than two years ago?
▲ | kurthr 3 days ago |
Yeah, some of the excerpts from that are beyond disturbing.
▲ | gherkinnn 3 days ago |
That Reuters report is sickening. I don't understand how that company gets away with this.

Regarding evil: they have been nothing but, for at least 10 years. Every person working for them is complicit.