Bender | 4 hours ago
They have no real stake in any decision they make, and they are not human. Not even a sociopathic or psychopathic human. At best they might be able to estimate casualties. LLMs probably can't even reach the logical conclusion of the fictional WOPR ("Joshua") from the movie WarGames [1]. Have an LLM play every game of tic-tac-toe and see whether it reaches the same conclusion as WOPR.

Edit: (answering my own question) From Gemini: Yes, many LLMs (GPT-4, Claude 3, Llama 3) have been tested on tic-tac-toe, and they generally perform poorly, often playing at or below the level of random chance. While they can understand the rules, they struggle with spatial reasoning, often trying to place a piece in an occupied spot, forgetting to block opponents, or failing to win.

If LLMs can't even figure out tic-tac-toe, then surely do not give these things the ability to launch any kind of weapon. Not even rubber bands.

[1] - https://www.youtube.com/watch?v=s93KC4AGKnY [video][6m][tic-tac-toe]
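For context, WOPR's conclusion holds because tic-tac-toe is a draw under perfect play, which a few lines of minimax can confirm. This is just a minimal sketch of that exhaustive search (the `winner`/`value` names are my own, not from any LLM benchmark mentioned above):

```python
from functools import lru_cache

# All eight three-in-a-row lines on a 3x3 board (indices 0..8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Negamax value from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # full board, no winner: draw
    other = 'O' if player == 'X' else 'X'
    # Best achievable score over every legal move.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ' ')

# Perfect play from the empty board ends in a draw.
print(value(' ' * 9, 'X'))  # → 0
```

The full game tree is small enough (well under 9! positions, fewer with memoization) that this runs instantly, which is exactly why the game makes such a clean sanity check for spatial reasoning.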
sheiyei | 4 hours ago | parent
Which makes them so great for offloading difficult (often bad) decisions: it wasn't me, it was the "objective" and "neutral" "superintelligence", to which I totally didn't give a suggestive prompt.