alpinisme 4 hours ago
The consequences for wrong AI output need to be a lot higher if we want to limit slop. Of course, there's space for LLMs and their hallucinations to contribute meaningful things, but we need at least a screaming all-caps disclaimer on content that looks like it could be human-generated but wasn't. Absent that disclaimer, or if it was insufficiently prominent, false statements should be treated as deliberate fraud.