femto 2 hours ago
I think it's because LLM output tends to be snippets of truth, connected in such a way as to be subtly false. It leaves you disoriented because it sits in the uncanny valley of truth: you know it's not right, but you can't put your finger on why. It's made worse by its asymmetric nature: it takes seconds to generate pages of words and hours to figure out what is wrong with them, for little to no gain to the reader.