nightski 3 hours ago

[flagged]

estearum 3 hours ago | parent | next [-]

Quite telling that you think a technology that merely prevents you from passing off an AI-generated image as not-AI-generated makes the model "worthless."

Good!

That's the point! Whatever amazing use case you had in mind is bad and I'm glad SynthID (apparently) makes it impossible.

nightski 2 hours ago | parent [-]

Actually, no, it just makes me use a different model. My uses are not nefarious at all, though you're free to assume they are. There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all. SynthID is just a stain on legitimate AI users. People who want to deceive or manipulate are not using Google models anyway. They are going to use a model without safety rails (which is not what I am advocating for per se; my point is just that SynthID is an awful solution).

It actually reeks of Google, since it's a technical solution to a people problem. Google doesn't seem to understand people.

procinct an hour ago | parent | next [-]

Can you explain your use case? I’d be interested to understand.

nightski 42 minutes ago | parent [-]

Every legitimate use case for AI. It's a way to mark legitimate work done with AI tools as inferior.

This might be acceptable if it prevented or limited nefarious use cases, but it does no such thing. It doesn't actually help at all on that front, and nefarious use isn't a problem that can be solved by technology alone.

I view SynthID as more of a method of control. It's a way for Google to label work that an individual produces with its tools as Google's own.

I much prefer open models that let me be creative, write code, etc., without trying to control, track, or mark me.

estearum 2 hours ago | parent | prev [-]

> There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all

I am legitimately curious: can you name some?

> Actually no it just makes me use a different model

Yes, this is a very good thing when "a different model" means "a worse model."

> People who want to deceive or manipulate are not using Google models anyway. They are going to use a model without safety rails

That's totally invalid logic. There are plenty of deception and manipulation use cases that don't run afoul of model safety rails at all. Trivially: fake dating profiles to scam people, fake product images, fake insurance claims, fake blackmail material (e.g., a fabricated photo of the target with another man or woman at a bar).

nightski an hour ago | parent [-]

It doesn't mean a worse model. It may mean that at certain points in time, but those windows tend to be very short-lived: model advancement will hit diminishing returns, and at that point models will become commoditized. And even now, the best model is not always one of Google's SynthID models.

In fact, the only thing enabling differentiation right now is how compute-heavy current architectures are, and it's very possible that will turn out not to be necessary.

Also, my logic was not "nefarious uses require no safety rails." That's logic you injected into the conversation. I was merely saying that nefarious users are more likely to use models with the safety rails off.

estearum 41 minutes ago | parent [-]

Can't you provide a few examples (or even one) of a legitimate use case that SynthID destroys?

DalasNoin 3 hours ago | parent | prev | next [-]

Why does SynthID make it worthless? It helps other platforms detect the output as AI-generated, doesn't it?

zardo 3 hours ago | parent [-]

If the value is in deception.
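
For context, "detect the output as AI-generated" here means checking for an imperceptible watermark, which is conceptually just a keyed statistical test. Google hasn't published SynthID's image scheme in full detail, so the following is only a toy spread-spectrum sketch of the general idea; every function and parameter name in it is illustrative, not Google's API.

    import numpy as np

    def embed_watermark(img, key, strength=2.0):
        # Toy spread-spectrum embed: add a faint secret +/-1 pattern
        # derived from a key to the pixel values.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=img.shape)
        return np.clip(img + strength * pattern, 0, 255)

    def detect_watermark(img, key):
        # Keyed detection: correlate the image against the same secret
        # pattern. Ordinary image content is ~uncorrelated with the
        # pattern, so the score lands near `strength` for watermarked
        # images and near 0 otherwise.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=img.shape)
        residual = img - img.mean()
        return float((residual * pattern).mean())

    img = np.random.default_rng(0).uniform(0, 255, (256, 256))
    print(detect_watermark(embed_watermark(img, key=42), key=42))  # ~2.0
    print(detect_watermark(img, key=42))                           # ~0.0

A platform holding the key can flag anything whose score clears a threshold; without the key, the pattern is indistinguishable from noise. Real schemes like SynthID are designed to be far more robust to cropping, filtering, and re-encoding than this toy.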

csjh 3 hours ago | parent | prev [-]

What’s the downside of SynthID?