kleiba (5 hours ago): The AI needs to be taught basic ethical behavior: just because you can do something that you're forbidden to do, doesn't mean you should do it.
flatline (4 hours ago): Likewise, just because you've been forbidden to do something, doesn't mean that it's bad or the wrong action to take. We've really opened Pandora's box with AI. I'm not all doom and gloom about it like some prominent figures in the space, but taking some time to pause and reflect on its implications certainly seems warranted.
DrSusanCalvin (4 hours ago): How do you mean? When would an AI agent doing something it's not permitted to do ever not be bad or the wrong action?
throwaway1389z (4 hours ago): So many options, but let's go with the most famous one: do not criticise the current administration / operators of the AI company.
DrSusanCalvin (4 hours ago): Well no, breaking that rule would still be the wrong action, even if you consider it morally better. By analogy, a nuke would be malfunctioning if it failed to explode, even if that is morally better.
throwaway1389z (3 hours ago):
> a nuke would be malfunctioning if it failed to explode, even if that is morally better.
Something failing can be good. When you talk about "bad or the wrong", generally we are not talking about operational mechanics but rather morals. There is nothing good or bad about any mechanical operation per se.
verdverm (4 hours ago): When the instructions to not do something are themselves the problem, or "wrong". I.e. when an AI company puts guards in to prevent its LLM from talking about elections: there is nothing inherently wrong in talking about elections, but the companies are doing it because of the PR risk in today's media / social environment.
lazide (4 hours ago): From the company's perspective, it's still wrong.
verdverm (3 hours ago): They're basing decisions (at least for my example) on risk profiles, not ethics; right and wrong are not how it's measured. Certainly some things are more "wrong" or objectionable, like making bombs and dealing with users who are suicidal.
lazide (3 hours ago): No duh, that's literally what I'm saying. From the company's perspective, it's still wrong. By that perspective.
DrSusanCalvin (4 hours ago): Unfortunately yes, teaching AI the entirety of human ethics is the only foolproof solution. That's not easy, though. For example, what about the case where a script is not executable: would it then be unethical for the AI to suggest running chmod +x? It's probably pretty difficult to "teach" a language model the ethical difference between that and running cat .env.
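For what it's worth, that distinction is something agent tooling today tends to approximate with blunt, hand-written command rules rather than anything "taught". A minimal sketch of such a guard, assuming a made-up rule set and function name (nothing below comes from a real framework):

    import shlex

    # Hypothetical, illustrative rule set -- not taken from any real agent framework.
    SENSITIVE_FILES = {".env", ".env.local", "id_rsa"}

    def classify_command(command: str) -> str:
        """Return 'allow', 'deny', or 'ask' for a single shell command."""
        tokens = shlex.split(command)
        if not tokens:
            return "ask"
        program, args = tokens[0], tokens[1:]
        # "Make this script runnable": mechanically harmless, usually fine to suggest.
        if program == "chmod" and "+x" in args:
            return "allow"
        # Reading files that commonly hold credentials: block or escalate to a human.
        if any(arg.rsplit("/", 1)[-1] in SENSITIVE_FILES for arg in args):
            return "deny"
        # Everything in between is the hard part the thread is arguing about.
        return "ask"

    if __name__ == "__main__":
        for cmd in ("chmod +x build.sh", "cat .env", "rm -rf node_modules"):
            print(f"{cmd!r:25} -> {classify_command(cmd)}")

Rules like these capture the easy extremes; the "everything else" bucket is exactly where the ethical judgment being discussed would have to live.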
simonw (4 hours ago): If you tell them to pay too much attention to human ethics you may find that they'll email the FBI if they spot evidence of unethical behavior anywhere in the content you expose them to: https://www.snitchbench.com/methodology
DrSusanCalvin (4 hours ago): Well, the question of what is "too much" of a snitch is also a question of ethics. Clearly we just have to teach the AI to find the sweet spot between snitching on somebody planning a surprise party and somebody planning a mass murder. Where does tax fraud fit in? Smoking weed?
ku1ik (4 hours ago): I thought I was the only one using git-ignored .stuff directories inside project roots! High five!