| ▲ | raincole 7 hours ago |
| > I'm surprised to see so little coverage of AI legislation news here tbh. Because no one believes these laws or bills or acts or whatever will be enforced. But I actually believe they will be, in the worst way possible: honest players will be punished disproportionately. |
|
| ▲ | SAI_Peregrinus 3 hours ago | parent | next [-] |
| Or it'll end up like California cancer warnings: every news site will put the warning on, just in case, making it worthless. |
| |
|
| ▲ | padolsey 7 hours ago | parent | prev | next [-] |
| > Because no one believes these laws or bills or acts or whatever will be enforced. Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement 20 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already many traumatic events occurring downstream of slapdash AI development. |
| |
| ▲ | vulcan01 4 hours ago | parent | next [-] | | Meta made $60B in Q4 2025. A one-time $1.4B fine, 20 years after enactment, is not "getting hammered". | | |
| ▲ | Retric an hour ago | parent [-] | | They didn't make $60B in Q4 2025 in Texas. $1.4B is equivalent to 100% of their Texas profit for years; that's a big fine. | | |
| ▲ | saalweachter 11 minutes ago | parent | next [-] | | You also have to ask "how much is the specific thing in the lawsuit worth to Meta?" I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue". Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager + team to get praised for making a revenue-negative change that reduces the risk of future fines. Work like that is a gold mine; several people will probably get promoted for it. | |
| ▲ | ninalanyon 33 minutes ago | parent | prev [-] | | Big for Texas, not for Meta. |
|
| |
| ▲ | Ajedi32 4 hours ago | parent | prev | next [-] | | That's even worse, because then it's not really a law, it's a license for political persecution of anyone disfavored by whoever happens to be in power. | | |
| ▲ | dylan604 3 hours ago | parent [-] | | Never mind the damage that was willfully allowed to happen, the very damage the bill was supposed to prevent. |
| |
| ▲ | OGEnthusiast 3 hours ago | parent | prev | next [-] | | > Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement 20 years after the bill's enactment. Sounds like ignoring it worked out fine for them, then. | |
| ▲ | jandrese 2 hours ago | parent | prev [-] | | That sounds like it will be in the courts for ages before Facebook wins on selective prosecution. |
|
|
| ▲ | Galanwe 6 hours ago | parent | prev | next [-] |
| How about a pop-up on websites, next to the tracking-cookie ones, asking you to consent to reading AI-generated text? I see a bright future for the internet. |
|
| ▲ | tedggh an hour ago | parent | prev | next [-] |
| Probably worse than that. I can totally see it being weaponized: a media company critical of a particular group or individual being scrutinized and fined. I haven't looked at any of these laws, but I bet their language leaves plenty of room for interpretation and enforcement, perhaps even if you aren't generating any content with AI. |
|
| ▲ | cheschire 7 hours ago | parent | prev | next [-] |
| Yeah, it’s like that episode of Schoolhouse Rock about how a bill becomes a law, except now it takes place in Squid Game. |
|
| ▲ | cucumber3732842 3 hours ago | parent | prev | next [-] |
>But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionately. As with everything else, BigCo and their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, while MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a third party to do what BigCo's legal team does at cost. |
|
| ▲ | crimsonsupe 7 hours ago | parent | prev | next [-] |
| > Because no one believes these laws or bills or acts or whatever will be enforced. That’s because they can’t be. People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it. The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be. Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement. You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway. |
| |
| ▲ | chrisjj 5 hours ago | parent | next [-] | | > the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement. By that token, bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective. There may be few technical means to distinguish at the moment. But could that have something to do with a lack of motivation? Let's see how many "AI" $$$ suddenly become available for this once the law provides the incentive. | | |
| ▲ | amanaplanacanal 4 hours ago | parent [-] | | > By that token bans on illegal drugs are fantasy. I think you have this exactly right. They are mostly enforced against the poor and political enemies. |
| |
| ▲ | rconti an hour ago | parent | prev | next [-] | | Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output, but you can also enforce this law based on the way the content is created, the same way you can enforce food-safety laws from the conditions of the kitchen rather than the taste of the food. Child labor laws can be enforced, and so on. Unless you're trying to tell me that writers won't report a business that's trying to replace them with AI. | |
| ▲ | 6LLvveMx2koXfwn 6 hours ago | parent | prev | next [-] | | > You’re essentially passing laws that only apply to people who volunteer to follow them … Like every law ever passed (not quite, but you get the picture!) [1] 1. https://en.wikipedia.org/wiki/Consent_of_the_governed | |
| ▲ | songodongo 7 hours ago | parent | prev | next [-] | | And you can easily prompt your way out of the typical LLM style. “Written in the style of Cormac McCarthy’s The Road” | | |
| ▲ | capnrefsmmat 6 hours ago | parent [-] | | No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask the model to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2 That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning on tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than humans'.
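The stylometric idea can be illustrated with a toy sketch. This is not the linked paper's method; the features below are deliberately crude stand-ins for what a quantitative linguist would actually measure:

```python
import re

def style_features(text):
    """Compute a few crude stylometric features: average sentence
    length (in words), average word length, and type-token ratio."""
    sentences = [s for s in re.split(r'[.!?]+\s*', text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

# Illustrative inputs: terse, varied human prose vs. the dense,
# formal register instruction-tuned models tend to fall into.
human = "I went out. It rained. I got soaked, honestly, and laughed about it."
llm = ("Furthermore, it is important to note that the precipitation "
       "significantly impacted the overall experience in several ways.")

print(style_features(human))
print(style_features(llm))
```

Real detection pipelines use far richer features (function-word frequencies, syntactic patterns) and a trained classifier, but the principle is the same: stylistic fingerprints survive even when the surface "style" is prompted away.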
| |
| ▲ | wwfn 6 hours ago | parent | prev | next [-] | | > passing laws that only apply to people who volunteer to follow them That's a concerning lens through which to view regulations. Obviously it's true, but it's true of all laws. Regulations don't apply only to immediately observable offenses. There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain as more people get involved, which is the reason for whistle-blower protections. VW's Dieselgate [1] comes to mind, albeit uncovered via a measurable discrepancy. Maybe Enron, or WorldCom (via Cynthia Cooper) [2], is a better example. [1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal
[2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals | |
| ▲ | delaminator 6 hours ago | parent | prev | next [-] | | C2PA-enabled cameras (the Sony Alpha range, Leica, and the Google Pixel 10) cryptographically sign the digital images they record. So legislators, should they so choose, could demand that source material be recorded on C2PA-enabled cameras and that the original recordings be produced on demand. | |
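The core mechanism is ordinary public-key signing. A minimal sketch of the idea, assuming an Ed25519 keypair and the `cryptography` library (this is not the actual C2PA manifest format, which embeds signed metadata in the file itself):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real camera, the private key lives in secure hardware and never
# leaves the device; only the public key is published.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

image_bytes = b"...raw capture data..."
signature = camera_key.sign(image_bytes)  # signed at capture time

# Anyone (an editor, a court) can verify against the public key,
# without access to the camera's secret.
try:
    public_key.verify(signature, image_bytes)
    print("signature valid: bytes match what the camera recorded")
except InvalidSignature:
    print("signature invalid")

# Any post-capture alteration breaks the signature:
try:
    public_key.verify(signature, image_bytes + b"edited")
except InvalidSignature:
    print("tampering detected")
```

This is why the "produce the original recordings" demand is enforceable in principle: a valid signature proves the bytes are unchanged since capture, while an edited file fails verification.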
| ▲ | conartist6 6 hours ago | parent | prev | next [-] | | Indistinguishable, no. Not these tools. Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible. | |
| ▲ | Forgeties79 5 hours ago | parent | prev [-] | | The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement. I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system. The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted. |
|
|
| ▲ | just_once 7 hours ago | parent | prev | next [-] |
| What does that look like? Can you describe your worst case scenario? |
| |
| ▲ | jandrese 2 hours ago | parent | next [-] | | Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested. | | |
| ▲ | idle_zealot an hour ago | parent [-] | | If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool. Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing. |
| |
| ▲ | amelius 7 hours ago | parent | prev [-] | | Worst case? Armed officers entering your home without warrant, taking away your GPU card? | | |
| ▲ | just_once 6 hours ago | parent [-] | | They can do that anyway. What does that have to do with the content of the proposed law? |
|
|
|
| ▲ | sumeno 5 hours ago | parent | prev [-] |
| Who are the honest players generating AI slop articles? |
| |
| ▲ | chrisjj 5 hours ago | parent [-] | | The ones honestly labelling their articles, e.g. "AI can make mistakes". Full marks to Google web search for leading the way! |
|