delichon 6 hours ago

> In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.

So Clawdbot may become a legal risk in New York, even if it doesn't generate copy.

And you can't use AI to help evaluate which data AI is forbidden to see, so you can't run AI over content of unknown provenance at all. This little side proposal could drastically limit the overall scope of AI usefulness, especially as the category of data forbidden to AI tools expands to other confidential material.
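For concreteness, a compliant safeguard would presumably have to look something like the sketch below: a purely deterministic gate with no model anywhere in the decision path. This is a hypothetical illustration of the idea, not anything drawn from the bill itself; the patterns and names are made up.

    import re

    # Hypothetical sketch of a non-AI safeguard: a deterministic,
    # rule-based gate that decides whether a document may be sent to
    # an AI service at all. No model participates in the decision,
    # which is the whole point of the requirement described above.

    # Illustrative patterns only; a real newsroom would maintain its
    # own list (source names, burner numbers, internal case IDs, ...).
    CONFIDENTIAL_PATTERNS = [
        re.compile(r"(?i)\bsource:"),          # tagged source attributions
        re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),  # phone numbers
        re.compile(r"(?i)\boff the record\b"),
    ]

    def may_send_to_ai(text: str) -> bool:
        """Return True only if no confidential pattern matches.

        Deliberately conservative: any match blocks the whole
        document, because a partial redaction would itself require
        judgment that, under the proposed rule, an AI can't be
        asked to make.
        """
        return not any(p.search(text) for p in CONFIDENTIAL_PATTERNS)

    if __name__ == "__main__":
        print(may_send_to_ai("Meeting notes: budget review at 3pm"))     # True
        print(may_send_to_ai("Source: J. Doe, off the record, said..."))  # False

The catch is exactly the one above: the rules have to be written and maintained by humans, and anything they miss flows through.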

InsideOutSanta 6 hours ago

This seems like common sense. As an experiment, I'm running OpenClaw with GLM-4.6V and letting my friends talk to it over WhatsApp.

Even though it has been instructed to maintain privacy between people who talk to it, it constantly divulges information from private chats, gets confused about who is talking to it, and so on.^ Of course, a stronger model would be less likely to screw up, but this is an intrinsic issue with LLMs that can't be fully solved.
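The failure mode is structural: the prompt asks the model to keep chats separate, but everything the prompt covers sits in one context the model can freely draw on. The only isolation that holds is isolation the model never gets a chance to break, i.e., per-sender histories enforced at the application layer. A minimal sketch of that idea follows; the names and the call_model stub are hypothetical, not OpenClaw's actual API.

    from collections import defaultdict

    SYSTEM_PROMPT = "You are a WhatsApp assistant. Answer the user."

    # One history per sender, kept outside the model entirely.
    histories: dict[str, list[dict]] = defaultdict(list)

    def call_model(messages: list[dict]) -> str:
        # Placeholder for a real LLM API call; echoes for demonstration.
        return f"(model reply to: {messages[-1]['content']})"

    def handle_message(sender_id: str, text: str) -> str:
        history = histories[sender_id]  # this sender's chat, nothing else
        history.append({"role": "user", "content": text})
        reply = call_model(
            [{"role": "system", "content": SYSTEM_PROMPT}, *history]
        )
        history.append({"role": "assistant", "content": reply})
        return reply

    if __name__ == "__main__":
        print(handle_message("alice", "Remind me what I told you?"))
        # Bob's context contains none of Alice's messages by construction.
        print(handle_message("bob", "What did alice say?"))

Under this design the model can't leak across chats because it never sees them, but you give up any cross-chat features, which is presumably why bots like mine don't work this way by default.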

Reporters absolutely should not run an instance of OpenClaw and provide it with information about sources.

^: Just to be clear, the people talking to it understand that they cannot divulge any actual private information to it.