autoexec 2 hours ago
> It was against their policy to use AI in producing any part of the final article, and the writer was aware of that. More than that, as a reporter covering AI, he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable, and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.
Barbing an hour ago | parent
> his whole job

Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better, but unfortunately let the fact that you could probably get away with it with zero impact decide for you.

IIRC the hallucinations here were essentially kicked off by user error in the first place. Or rather, let’s at least say: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop & without human review. (e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some regex!)
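Even the regex idea covers one whole failure mode. A minimal sketch (Python, with made-up function names and toy data, not anyone’s actual pipeline): extract every direct quote from the draft and flag any that don’t appear verbatim in the source transcripts, so a fabricated quote can’t slip through unreviewed.

    import re

    def extract_quotes(article: str) -> list[str]:
        """Pull direct quotes (straight or curly double quotes) out of a draft."""
        return re.findall(r'[“"]([^“”"]{10,})[”"]', article)

    def unverified_quotes(article: str, sources: list[str]) -> list[str]:
        """Return quotes from the draft found verbatim in none of the sources."""
        def squash(s: str) -> str:
            return " ".join(s.split())  # normalize whitespace before comparing
        corpus = squash(" ".join(sources))
        return [q for q in extract_quotes(article) if squash(q) not in corpus]

    if __name__ == "__main__":
        draft = 'The mayor said “we never approved that budget line” on Tuesday.'
        transcripts = ["MAYOR: Let me be clear, we never approved that budget line."]
        print(unverified_quotes(draft, transcripts))  # [] means every quote checks out

Anything that function returns goes back to a human before publication. The llm-council version would be the same loop with model cross-review instead of substring matching: pricier, but it would catch paraphrased fabrications too.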