gortok 7 hours ago
> So if this is a tool, the fault lies fully in the user, and if this is treated as "another person's work" then the user knowingly passed the work onto someone not authorized to do it. Both end up in the user being guilty.

I am particularly against this point of view, because we as a community have long touted how computers can do the job better and faster, and that computers don't make mistakes. When there are bugs, they're seen as flaws in the system and rectified by programmers. When there are gaps between user expectations and how the software works, it's our job to manage and reduce those gaps.

In the case of AI, we are somehow, probably because we know it's non-deterministic, turning that social contract we had developed with users on its head. Now the stance is: that's just the way it is, and it's up to users to know whether the computer is lying to them. We have absolved ourselves of both the technical and non-technical responsibilities to ensure the computer doesn't lie to the user, subvert their expectations, or act in a way contrary to human logic.

AI may be different in that it's non-deterministic, but that's all the more reason we're responsible for ensuring AI adoption aligns with the social contract we created with users. If we can't do that with AI, then it's up to us to stop chasing endless dollars and be forthright with users that facts are optional when it comes to AI.
chrisjj 6 hours ago | parent
> we as a community have long touted ... that computers don't make mistakes.

No community I know. Otherwise, I agree.