barbazoo 8 hours ago
> And there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that are now possible with this, at a level never before possible.

I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?
SL61 6 hours ago
LLMs are very helpful for transcribing handwritten historical documents, but sometimes those documents contain language or ideas that a perfectly aligned LLM will refuse to output. Sometimes that's a hard refusal; sometimes (even worse) the model subtly cleans up the language. In my experience the latest batch of models is a lot better at transcribing the text verbatim without moralizing about it (i.e., at "understanding" that they're fulfilling a neutral role as a transcriber), but it was a really big issue in the GPT-3/4 era.
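As a rough illustration of that role framing, here's a hedged sketch using the OpenAI Python SDK. The model name, file name, and prompt wording are illustrative assumptions, not anyone's known-good recipe:

    # Sketch: frame the model as a neutral archival transcriber.
    import base64
    from openai import OpenAI

    client = OpenAI()
    with open("page.jpg", "rb") as f:  # hypothetical scan of the document
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed; substitute whatever vision model you use
        messages=[
            {"role": "system",
             "content": "You are a neutral archival transcriber. "
                        "Transcribe the page verbatim, preserving the "
                        "original spelling and period language exactly. "
                        "Do not paraphrase, soften, or editorialize."},
            {"role": "user",
             "content": [{"type": "image_url",
                          "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}]},
        ],
    )
    print(resp.choices[0].message.content)

The system prompt does the work here: the idea is that casting the model as a transcriber makes verbatim output the expected, compliant behavior rather than the borderline one.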
| |||||||||||||||||||||||
spijdar 8 hours ago
Realistically, a lot of people do this for porn. In my experience, though, it's necessary for doing anything security-related. Interestingly, the big models refuse less often for me when I ask, e.g., "in <X> situation, how do you exploit <Y>?", but local models will frequently flat-out refuse unless they've been abliterated.
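(For anyone wondering, like barbazoo above, what "abliterated" actually means: the rough idea is to estimate a "refusal direction" in the model's residual stream and project it out of the weights. Below is a minimal sketch in PyTorch, assuming you've already captured per-prompt activations at some middle layer; all names are illustrative, not any specific script's code.)

    import torch

    def refusal_direction(harmful_acts: torch.Tensor,
                          harmless_acts: torch.Tensor) -> torch.Tensor:
        # Difference of mean residual-stream activations between
        # refused and answered prompts, normalized to a unit vector.
        d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
        return d / d.norm()

    def ablate_weight(w: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        # For a weight matrix whose rows write into the residual
        # stream, remove the component along d: W <- (I - d d^T) W,
        # so the model can no longer write along the refusal direction.
        return w - torch.outer(d, d) @ w

    # Toy usage with random stand-ins for real activations/weights:
    d_model = 8
    harmful = torch.randn(16, d_model)    # activations on refused prompts
    harmless = torch.randn(16, d_model)   # activations on answered prompts
    d = refusal_direction(harmful, harmless)
    w_out = torch.randn(d_model, 32)      # e.g. some output projection
    w_ablated = ablate_weight(w_out, d)
    assert torch.allclose(d @ w_ablated, torch.zeros(32), atol=1e-5)

In practice, implementations typically estimate and ablate the direction across several layers and apply the projection to every matrix that writes into the residual stream, but the core operation is this one rank-1 projection.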
| |||||||||||||||||||||||
throwuxiytayq 8 hours ago
The in-ter-net is for porn | |||||||||||||||||||||||
| |||||||||||||||||||||||
pmarreck 7 hours ago
1) Coming up with any valid criticism of Islam at all (for some reason, criticism of Christianity or Judaism is perfectly allowed, even with public models!).

2) Asking questions about sketchy things. Simply asking should not be censored.

3) Porn or foul language (I don't use it for this).

4) Imitating or representing a public figure, which is often blocked.

5) Asking security-related questions when you are actually doing security work.

6) For people who have lived through them: using AI to process traumatic experiences that are illegal even to describe.

Many other instances.
| |||||||||||||||||||||||