throwaway1389z 4 hours ago:
So many options, but let's go with the most famous one: Do not criticise the current administration/operators-of-ai-company.

DrSusanCalvin 4 hours ago:
Well no, breaking that rule would still be the wrong action, even if you consider it morally better. By analogy, a nuke would be malfunctioning if it failed to explode, even if that is morally better.

throwaway1389z 3 hours ago:
> a nuke would be malfunctioning if it failed to explode, even if that is morally better.

Something failing can be good. When we talk about something being "bad" or "wrong", we are generally talking about morals, not operational mechanics. There is nothing good or bad about any mechanical operation per se.

verdverm 4 hours ago:
Sometimes the instructions not to do something are themselves the problem, i.e. "wrong". When an AI company puts guards in to prevent its LLM from talking about elections, there is nothing inherently wrong with talking about elections; the company does it because of the PR risk in today's media/social environment.

lazide 4 hours ago:
From the company's perspective, it's still wrong.

verdverm 3 hours ago:
They're basing decisions (at least in my example) on risk profiles, not ethics; right and wrong are not how it's measured. Certainly some things are more "wrong" or objectionable, like making bombs or dealing with users who are suicidal.

lazide 3 hours ago:
No duh, that's literally what I'm saying. From the company's perspective, it's still wrong. By that perspective.