zwnow 6 days ago

Not only admitting it — it should be law to mark anything AI-generated as AI-generated, even if AI contributed just a tiny bit. I don't want to use AI slop, and I should be allowed to make informed decisions based on that preference.

scrollaway 6 days ago | parent [-]

Did you by any chance type this comment on a device that has autocorrect enabled?

jangxx 5 days ago | parent | next [-]

Autocorrect is not generative AI in the way that anyone is using that word. Also autocorrect doesn't even need to use any sort of ML model.

scrollaway 5 days ago | parent [-]

Ah yes the duality of anti-AI crowds on HN. “GenAI is just fancy autocorrect”, and “autocorrect isn’t actually GenAI”.

The thing is, if you're talking about making laws (as GP is), your "surely people understand this difference" strategy doesn't count for squat, and the impact will be larger than you think.

jangxx 5 days ago | parent [-]

You don't seem to understand what people mean when they say "AI is just fancy autocorrect". They're talking about the little word suggestions above the keyboard, not about correcting spelling. And yes, of course those suggestions are provided by some sort of ML model, and yes, if you actually wrote a whole article using only them, it should be marked as AI-generated — but literally no one is doing that. Maybe because it's not fancy enough autocorrect. Either way, this is not the gotcha you think it is.

NewsaHackO 5 days ago | parent [-]

But the original poster said:

>Even if AI contributed just a tiny bit.

Which would imply autocorrect should be reported as AI use.

jangxx 5 days ago | parent [-]

A law like this would obviously need some sort of sensible definition of what "AI" means in this context. Online translation tools also use ML models, and so do the systems that unlock your device with your face, so classifying all of that as "AI contributions" would make the definition completely useless.

I assume the OP was talking about things like LLMs and diffusion models, which one could definitely single out for regulatory purposes. At the end of the day I don't think it would ever be realistically possible to have a law like this anyway — at least not one that wouldn't come with a bunch of ambiguity that would need to be resolved in court.

scrollaway 5 days ago | parent [-]

OK, so define it for us, please. Because, once again, this thread is talking about introducing laws about "AI". You say OP was talking about LLMs — so SLMs are fine, then? If not, where is the boundary? If they are fine, then congratulations: you have created a new industry of people pushing the boundaries of what SLMs can do, as well as how they are defined.

Laws are built on definitions and this hand-wavy BS is how we got nonsense like the current version of the AI act.

jangxx 5 days ago | parent [-]

Why are you so mad at me? I'm not even the OP you should be asking these questions. I'm also not convinced we need regulation like this in the first place, so I can't tell you where the boundary should be — but a boundary could certainly be found, and it would be well beyond simple spellchecking autocorrect.

I also don't understand why you think this would be so impossible to define. There are regulations in all kinds of areas that target specific things, like chemicals or drugs, and just because some of those regulations have incentivized people to slightly change a regulated thing into an unregulated thing doesn't mean we don't regulate those areas at all. So what makes AI systems so different that you think it would be impossible to find an adequate definition?

zwnow 5 days ago | parent | prev [-]

[flagged]