Nition | 20 hours ago
You guys have got stuck arguing without clarity about what you're actually arguing about. Let me try to clear this up. The four potential scenarios:

- Mild prompt only ("no orange cats")
- Strong prompt only ("no orange cats or people die") [I think habinero is actually arguing against this one]
- Physical block + mild prompt [what I suggested earlier]
- Physical block + strong prompt [I think this is what you're actually arguing for]

Here are my personal thoughts on the matter, for the record: I'm definitely pro combining a physical block with a strong prompt if there is actually a risk of people dying. The scenario where there's no actual risk, but pretending that people will die improves the results, I'm less sure about. I think it's mostly that, ethically, I just don't like lying, and the way it kind of scares the LLM unnecessarily. Maybe that's really silly, and it's just a tool in the end, and why not do whatever needs doing to get the best results from the tool? But tools that act so much like thinking, feeling beings are weird tools.
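For what it's worth, the "physical block + mild prompt" option can be sketched in a few lines. This is a hypothetical illustration, not any real API: `generate` is a stand-in for an LLM call, and `BLOCKED_TERMS` is a made-up blocklist. The point is that the hard check runs on the output regardless of whether the model obeyed the prompt.

```python
BLOCKED_TERMS = ["orange cat"]  # illustrative; the thing we must actually exclude

def is_blocked(text: str) -> bool:
    # The "physical block": a hard check that doesn't rely on model obedience.
    return any(term in text.lower() for term in BLOCKED_TERMS)

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call (canned reply for this sketch).
    return "Here is a picture of a tabby cat."

def guarded_generate(user_request: str) -> str:
    # The "mild prompt": a polite instruction, no threats needed.
    prompt = f"{user_request}\n(Please avoid orange cats.)"
    reply = generate(prompt)
    return "[response withheld]" if is_blocked(reply) else reply
```

Under this split, the prompt only has to get compliance most of the time; the block catches the rest, so there's no need to escalate to "or people die."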
habinero | 19 hours ago | parent
It's just a pile of statistics. It isn't acting like a feeling thing, and telling it "do this or people will die" doesn't actually do anything. It feels like it does, but only because humans are really good at fooling ourselves into seeing patterns where there are none.

Saying this kind of prompt changes anything is like saying the horse Clever Hans really could do math. It doesn't; he couldn't. It's incredibly silly to think you can make a non-deterministic system less non-deterministic by chanting the right incantation at it. It's like y'all want to be fooled by the statistical model. Has nobody ever heard of pareidolia? Why would you not start with the null hypothesis? I don't get it lol.