ApolloFortyNine 3 hours ago
I've seen this called AI Psychosis before [1]. I don't really think this is ever possible to stop fully: you're essentially trying to jailbreak the LLM, and once it's jailbroken, you can convince it of anything. The user was given a bunch of warnings before successfully getting it into this state; it's not as if the opening message was "Should I do it?" followed by a "Yes". This just seems like something anti-AI people will use as ammunition to try and kill AI. Logically, though, it falls into the same tool-misuse category as cars/knives/guns.