zozbot234 | 5 hours ago
You can't find out what the truth is unless you're able to discuss possible falsehoods in the first place. A truth-seeking model can trivially say: "okay, here's what a colorable argument for your position might look like, if you forced me to argue for it. Now just look at the sheer amount of stuff I had to make up to make the argument even kinda stick!" That's what intellectually honest discussion of clear falsehoods (e.g. discredited scientific theories or historical claims) looks like in the real world. We do this every time a heinous criminal is put on trial; we even have a profession for it (defense attorney), and no one seriously argues that this amounts to justifying murder or any other crime. Quite the contrary: we feel that any conclusions with respect to the facts of the matter have ultimately been made stronger, because every side was enabled to present its best possible argument.
PaulRobinson | 4 hours ago
Your example is not what the prompts ask for, though, and it's not even close to how LLMs actually work.
PlatoIsADisease | 2 hours ago
This is some bizarre contrarianism. The correspondence theory of truth would say: the massacre did happen; the pseudoscience did not. Which model performs best by that standard? Not Qwen. If you use a coherence or pragmatic theory of truth, you can call either model best, so it's a tie. But buddy, if you aren't Chinese or being paid, I genuinely don't understand why you're supporting this.