CoastalCoder 6 hours ago

Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?

(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)

fl0ki 5 hours ago

This is the only way, because anything less creates a loophole: any abuse or slander can be blamed on an agent, and nobody can conclusively prove the agent actually wrote it (its operator has access to the same account keys, etc.).

chasd00 5 hours ago

Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.

marcosdumay 4 hours ago

Legally, yes.

But as you pointed out, not everything carries legal liability. Socially, no: they should face worse consequences. Deciding to let an AI talk for you is malicious carelessness.

eshaham78 5 hours ago

Liability is the right stick, but attribution is the missing link. When an agent spins up on an ephemeral VPS, harasses a maintainer, and vanishes, good luck proving who pushed the button. We might see a future where high-value open source repos require 'Verified Human' checks or bonded identities just to open a PR, which would be a tragedy for anonymity.
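To make the "Verified Human" idea concrete: one low-tech approximation is a CI gate that rejects a PR unless every commit in it carries a valid GPG signature from a maintainer-curated allowlist. Below is a minimal sketch, assuming a hypothetical verified_humans.txt of key fingerprints; the file name and the policy are inventions for illustration, not any real platform's feature. Note it only proves control of an allowlisted key, not humanity, which is exactly the attribution gap described above.

    #!/usr/bin/env python3
    # Hypothetical CI gate: fail a PR unless every commit it adds carries
    # a valid GPG signature from a maintainer-curated "verified human"
    # allowlist. File name and policy are assumptions for this sketch.
    import subprocess
    import sys

    ALLOWLIST = "verified_humans.txt"  # hypothetical: one key fingerprint per line

    def load_allowlist(path):
        with open(path) as f:
            return {ln.strip().upper() for ln in f
                    if ln.strip() and not ln.startswith("#")}

    def commits(base, head):
        # All commits the PR would add on top of the base branch.
        out = subprocess.run(["git", "rev-list", f"{base}..{head}"],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def signer_fingerprint(sha):
        # `git verify-commit --raw` relays GnuPG status lines on stderr;
        # a verified signature includes "[GNUPG:] VALIDSIG <fingerprint> ...".
        res = subprocess.run(["git", "verify-commit", "--raw", sha],
                             capture_output=True, text=True)
        for line in res.stderr.splitlines():
            parts = line.split()
            if len(parts) >= 3 and parts[1] == "VALIDSIG":
                return parts[2].upper()
        return None  # unsigned, or the signature failed to verify

    def main(base, head):
        allowed = load_allowlist(ALLOWLIST)
        bad = []
        for sha in commits(base, head):
            fpr = signer_fingerprint(sha)
            if fpr is None:
                bad.append(f"{sha[:12]}: no valid signature")
            elif fpr not in allowed:
                bad.append(f"{sha[:12]}: signer {fpr} not on the allowlist")
        for msg in bad:
            print(msg, file=sys.stderr)
        return 1 if bad else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1], sys.argv[2]))

A CI job would invoke it as something like `python verify_humans.py origin/main HEAD`. The obvious weakness is the one fl0ki raises: a key attests to an account, not to who (or what) typed the message.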

intended 4 hours ago

I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content-generation capability and verification effort.