loeg 8 hours ago

AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.
rootlocus 5 hours ago | parent

Presumably the "subject matter expert" will review the output of the LLM, just as a reviewer would. I think it's disingenuous to assume that just because someone used AI, they didn't look at or review the output.