king_geedorah | 4 days ago
I don’t see anything in the article besides the jailbreaking in terms of faults, and I’d expect “can be made to do things OpenAI does not want you to make it do” to be a good (or at least neutral) thing for users and a bad thing for OpenAI. I expect “enterprise” to fall into the former category rather than the latter, so I don’t understand where the unusable claim comes from. What have I missed, or what am I misunderstanding?
nerdsniper | 4 days ago | parent
“AI Safety” is really about whether it’s “safe” (economically, legally, reputationally) for a third-party corporation (not the company which created the model) to let customers/the public interact with it via an AI interface. If a Mastercard AI talks with customers and starts saying the n-word, it’s not “safe” for Mastercard to use that model in a public-facing role. As org size increases, even purely internal uses could be legally/reputationally hazardous.