▲ simianwords 3 hours ago
The problem is understanding what is true and what is not. It's a much harder problem to solve than you think. OpenAI is using this method — they over-index on citations to the point where ChatGPT will almost blindly assume something is true when it's published in some credentialed place. The alternative is to let the model use its own intuition to judge what is true and false. It's not super clear which option is better.
▲ malux85 3 hours ago | parent
This isn't a discussion about finding absolute truth, which is hard because nobody has even created a universally generalized definition of truth, let alone a way to find it; and literally everybody knows that, implicitly or explicitly. This is a discussion about how a model that is fine-tuned to be polite is less truthful than one that is not.