stingraycharles 3 hours ago
That really doesn’t matter much. The point of making AIs follow these rules is that they need to operate within a constrained set of rules. You can’t guarantee that programmatically, so you demonstrate it empirically as a proxy. AIs can be used and abused in ways that are entirely different from humans, and that creates a liability. I think it’s going to be very difficult to categorically prevent these kinds of issues unless someone can integrate truly binary logic into LLM systems, which is nearly impossible, almost by definition of what LLMs are.
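
To make the distinction concrete, here’s a minimal sketch (Python, with a hypothetical call_llm stand-in, not any particular vendor’s API) of the usual workaround: deterministic checks wrapped around the model rather than integrated into it.

    # Sketch of a binary rule check *around* the model, not inside it.
    # call_llm is a hypothetical stand-in for any text-generation API.
    FORBIDDEN = {"rm -rf /", "DROP TABLE"}

    def call_llm(prompt: str) -> str:
        return "some generated text"  # placeholder for a real model call

    def guarded_llm(prompt: str) -> str:
        output = call_llm(prompt)
        # The wrapper is deterministic and binary; the model call it
        # wraps stays probabilistic.
        if any(bad in output for bad in FORBIDDEN):
            raise ValueError("output violated a hard rule")
        return output

The guarantee only covers patterns you enumerated in advance; the generation itself is still unconstrained, which is exactly the gap being described.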