PlatoIsADisease · 2 hours ago
I asked ChatGPT to give me a solution to a real-world prisoner's dilemma situation. It got it wrong. It moralized it. Then I asked it to be Kissinger and Machiavelli (and 9 other IR realists), and all 11 got it wrong. Moralized. Grok got it right.
XenophileJKO · 2 hours ago
The current 5.2 model has its "morality" dialed to 11, probably a result of imprecise safety training. For example, the other day I tried to have ChatGPT role-play as the computer from WarGames, and it lectured me about how it couldn't create a "nuclear doctrine".