antdke 6 hours ago
Well, imagine this was controlling a weapon. “Should I eliminate the target?” “no” “Got it! Taking aim and firing now.”

bigstrat2003 6 hours ago
It is completely irresponsible to give an LLM direct access to a system. That was true before and remains true now. And unfortunately, that didn't stop people before, and it still won't.

nielsole 6 hours ago
Shall I open the pod bay doors?

nvch 6 hours ago
"Thinking: the user recognizes that it's impossible to guarantee elimination. Therefore, I can fulfill all initial requirements and proceed with striking it."

verdverm 6 hours ago
That's why we keep humans in the loop. I see stuff like this all the time; it's not unusual thinking text, which is why it isn't particularly interesting.