avdelazeri 3 hours ago
While I never measured it, this aligns with my own experience. It's better to have very shallow conversations where you regenerate outputs aggressively and only keep the best results. Asking for fixes, restructuring, or elaboration on generated content has rapidly diminishing returns. And once the model has made a mistake (or hallucinated), it will keep erring even if you provide evidence that it is wrong; LLMs commit to certain things very strongly.
HPsquared 6 minutes ago
A human would cross out that part of the worksheet, but an LLM keeps re-reading the wrong text.