| ▲ | TheAceOfHearts 7 hours ago |
| I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form. Unless it suddenly becomes a premium feature to have 100% human-written articles, but are people really going to pay for that?
|
| > substantially composed, authored, or created through the use of generative artificial intelligence
|
| The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proofreading without disclosing that you used AI to help with that.
|
| [0] https://en.wikipedia.org/wiki/1986_California_Proposition_65 |
|
| ▲ | tokioyoyo 7 hours ago | parent | next [-] |
| At least it would be possible to auto-filter everything out. Maybe the market will somehow make it possible for non-AI content to get some spotlight because of that. |
|
| ▲ | em500 7 hours ago | parent | prev | next [-] |
| > I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form.
|
| This is very predictably what's going to happen, and it will be just as useless as Prop 65 or the EU cookie laws or any other mandatory disclaimers. |
| |
| ▲ | layer8 7 hours ago | parent | next [-] | | The EU ePrivacy directive isn’t about disclaimers. | | |
| ▲ | consp 7 hours ago | parent [-] | | The problem is that people believe it is. People believe the advertising industry's narrative that they are forced to show the insane consent screens and have to make it difficult. Yet they are not: a "reject all" must be as easy as an "accept all" (and "legitimate reasons" do not exist; uses are either allowed, in which case you don't have to ask, or they are not). | | |
| |
| ▲ | codewench 7 hours ago | parent | prev [-] | | How is that useless? Adding the warning tells me everything I need to know. Either you generated it with AI, in which case I can happily skip it, or you _don't know_ whether AI was used, in which case you clearly don't care about what you produce, and I can skip it. The only concern then is people who use AI and don't apply this warning, but given how easy it is to identify AI-generated material, you just have to have a good one-strike rule and be judicious with the ban hammer. | | |
| ▲ | SkyBelow 6 hours ago | parent [-] | | Because you have to be able to prove it wasn't AI when the law is tested, and keeping records and proof that you didn't use AI is going to be really difficult, if it's possible at all. For little people having fun, unless you poke the wrong bear, it won't matter. But for companies that are constantly the target of lawsuits, expect a new field of unlabeled-AI trolling comparable to patent trolling. We already see this with the California label: it gets applied to things that don't cause cancer because putting the label on is much cheaper than going through the process of proving that some random thing doesn't cause cancer. If the government showed up and claimed your comment was AI generated and you had to prove otherwise, how would you? | | |
| ▲ | shimman 19 minutes ago | parent [-] | | "One regulation was kinda bad, so we should never regulate anything again." Good god, this is pathetic. Do you financially gain from AI, or do you just think it's hard to prove someone didn't use it? This is the bare minimum and you're throwing temper tantrums... The onus will be on the AI companies pushing these wares to follow regulations. If that makes it harder for the end user to use them, well, too bad so sad. |
|
|
|
|
| ▲ | mold_aid 7 hours ago | parent | prev [-] |
| I think a lot of people are asking that question about many digital services; I'm pretty sure that in areas like education and media, "no AI!" is going to be something rich people look for, sure. Editing and proofreading are "substantial" elements of authorship. Hope these laws include criminal penalties for "it's not just this - it's that!" "we seized Tony Dokoupil's computer and found Grammarly installed," right, straight to jail |