| ▲ | keithnz 5 hours ago |
tell them what to prompt the AI with to get the correct results. I've seen a number of YouTube shorts lately doing this, where some scientist gets "refuted" by some random person based on an LLM result. They then sit down with the LLM, ask the same question, get the same wrong answer, then follow it up with a clarifying question, at which point the LLM "realizes" its mistake and gives a better answer.
| ▲ | roywiggins 5 hours ago | parent |
And then ask another question, and the LLM changes its mind again ("are you sure?"). It's not actually realizing anything so much as following your lead. Yes, follow-up questions can help dislodge more information, but fundamentally you can, accidentally or on purpose, bully an LLM into contradicting itself quite easily, and that has only an incidental relationship to correctness.