sillyfluke 5 hours ago
> To be fair, I don't think it is an AI problem, more a quirk of formal communication; the same happens with human secretaries.

Obviously you're not a golfer. Human secretaries don't have non-deterministic hallucinations and random critical omissions in their summaries, which I've witnessed first-hand with LLMs. More importantly, when humans do make mistakes, you have more deterministic mitigations with them than you do with LLMs; with LLMs there are no mitigations except praying that some unspecified future model will magically be better at summaries. The only way to stay sane when using these tools is to pretend these things won't ever happen and just go about your business like the rest of the zombie workforce, because no one wants to stop the train and address the issue. There's a reason the subtitle of Dr. Strangelove is "How I Learned to Stop Worrying and Love the Bomb".