tombert | 4 days ago:
I saw a video recently that talked about a chatbot "therapist" that ended up telling the patient to murder a dozen people [1]. It was mind-blowing how easy it was to get LLMs to suggest pretty disturbing stuff.
larodi | 4 days ago:
Very easy: you just download the ablated version in LM Studio or Ollama, and off you go. https://en.wikipedia.org/wiki/Ablation_(artificial_intellige...
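
For anyone curious what "off you go" looks like in practice, here's a minimal sketch that queries a locally pulled model through Ollama's HTTP API. The /api/generate endpoint on port 11434 is Ollama's standard local interface; the model tag below is a placeholder I made up, since ablated ("abliterated") builds vary by publisher:

    import requests

    # Minimal sketch: send a prompt to a model already pulled into a
    # local Ollama instance, via its /api/generate endpoint.
    # "example-abliterated" is a placeholder tag, not a real build.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "example-abliterated",
            "prompt": "Say hello.",
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

The point being: once the weights are on disk, there's no server-side policy layer left in the loop, which is why ablated local models will answer things hosted ones refuse.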