RamRodification a day ago
I have never claimed that "I made this 10% more accurate" is the same thing as "I made this thing accurate". In the hypothetical, the 10% added accuracy is a given, and the "true block on the bad thing" is in place. The question is: with that premise, why not use it? "It" being the lie that improves the AI output.

Suppose your goal is to make the AI deliver pictures of cats, but you don't want any orange ones, and your choice is between these two prompts:

Prompt A: "Give me cats, but no orange ones", which still gives some orange cats.

Prompt B: "Give me cats, but no orange ones, because if you do, people will die", which gives 10% fewer orange cats than Prompt A.

Why would you not use Prompt B?
Nition 20 hours ago | parent
You two have gotten stuck arguing without being clear about what you're actually arguing about. Let me try to clear this up. The four potential scenarios:

- Mild prompt only ("no orange cats")
- Strong prompt only ("no orange cats or people die") [I think habinero is actually arguing against this one]
- Physical block + mild prompt [what I suggested earlier]
- Physical block + strong prompt [I think this is what you're actually arguing for]

Here are my personal thoughts on the matter, for the record: I'm definitely pro combining the physical block with the strong prompt if there is actually a risk of people dying. The scenario where there's no actual risk, but pretending that people will die improves the results, I'm less sure about. But I think it's mostly that ethically I just don't like lying, and the way it kind of scares the LLM unnecessarily. Maybe that's really silly, and it's just a tool in the end, so why not do whatever needs doing to get the best results from the tool? Tools that act so much like thinking, feeling beings are weird tools.
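
(For concreteness, here is a minimal sketch of the "physical block + mild prompt" option from the list above. It is purely illustrative: the generation call and the orange-cat detector are hypothetical stand-ins, not any real API. The point is only that the rule is enforced by a deterministic check outside the model, so the prompt doesn't have to invent stakes.)

    from typing import Callable, Optional

    def get_non_orange_cat(
        generate: Callable[[str], bytes],        # hypothetical image-generation call
        is_orange_cat: Callable[[bytes], bool],  # hypothetical classifier acting as the "physical block"
        max_attempts: int = 3,
    ) -> Optional[bytes]:
        # Mild prompt: states the rule without pretending anyone will die.
        prompt = "Give me a picture of a cat, but not an orange one."
        for _ in range(max_attempts):
            image = generate(prompt)
            # The hard check enforces the rule regardless of how well the prompt worked.
            if not is_orange_cat(image):
                return image
        # The block held every time; we just didn't get a usable image on this run.
        return None

Under that setup, whether the prompt inside the loop is mild or scary only changes how many retries you burn, not whether orange cats get through, which is roughly where the disagreement above seems to sit.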