| ▲ | theoldgreybeard 11 hours ago |
| The interesting tidbit here is SynthID. While a good first step, it doesn't solve the problem of AI generated content NOT having any kind of watermark. So we can prove that something WITH the ID is AI generated, but we can't prove that something without one ISN'T AI generated. It would be nice if all photos and videos generated by the big players carried some kind of standardized identifier - but then you're still left with the bajillion other "grey market" models that won't give a damn about that. |
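A toy sketch of that asymmetry, assuming a spread-spectrum style mark (this is not how SynthID actually works - Google hasn't published its internals): the provider adds a secret pseudo-random pattern at low amplitude, and detection is a correlation test only the key holder can run. Presence is strong evidence of AI generation; absence tells you nothing.

```python
import numpy as np

rng = np.random.default_rng(42)
PATTERN = rng.standard_normal((256, 256))  # provider-held secret key pattern (toy stand-in)

def embed(image: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Add the secret pattern at an amplitude of a few intensity levels."""
    return np.clip(image + strength * PATTERN, 0, 255)

def detect(image: np.ndarray, threshold: float = 0.02) -> bool:
    """Correlate against the secret pattern; only the key holder can run this."""
    centered = (image - image.mean()).ravel()
    score = np.dot(centered, PATTERN.ravel()) / (np.linalg.norm(centered) * np.linalg.norm(PATTERN))
    return score > threshold

photo = rng.uniform(0, 255, (256, 256))  # stand-in for any unmarked image
print(detect(embed(photo)))  # True  -> presence proves "we generated this"
print(detect(photo))         # False -> absence proves nothing about how it was made
```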
|
| ▲ | akersten 11 hours ago | parent | next [-] |
Some days it feels like I'm the only hacker left who doesn't want government-mandated watermarking in creative tools. Had politicians 20 years ago been this overreactive, they'd have demanded Photoshop leave a trace on anything it edited. The amount of moral panic is off the charts. It's still a computer, and we still shouldn't trust everything we see. The fundamentals haven't changed. |
| |
| ▲ | darkwater 10 hours ago | parent | next [-] | | > It's still a computer, and we still shouldn't trust everything we see. The fundamentals haven't changed. I think that by now it should be crystal clear to everyone that the sheer scale a new technology enables for $nefarious_intent matters a lot. Knives (under a certain size) are not regulated. Guns are regulated in most countries. Atomic bombs are definitely regulated. They can all kill people if used badly, though. When a photo was faked/composited with old tech, it was relatively easy to spot. With Photoshop it became harder to spot, but at the same time it wasn't easy to mass-produce altered images. Large models are changing the rules here as well. | |
| ▲ | csallen 10 hours ago | parent | next [-] | | I think we're overreacting. Digital fakes will proliferate, and we'll freak out because it's new. But after a certain amount of time, we'll just get used to it and realize that the world goes on, and that the major adverse effects actually aren't that difficult to deal with. Which is not the case with nuclear proliferation or things like that. The story of human history is newer generations freaking out about progress and novel changes that have never been seen before, and later generations being perfectly okay with it and adapting to a new way of life. | |
| ▲ | darkwater 10 hours ago | parent | next [-] | | In general I concur, but the adaptation doesn't come out of the blue or only because people get used to it; it also happens because countermeasures are taken, regulations are written and adjustments are made to reduce the negative impact. Also, the hyperconnected society is still relatively new and I'm not sure we have adapted to it yet. | |
| ▲ | Yokohiii 2 hours ago | parent [-] | | Photography and motion pictures were deemed evil. Video games made you a mass murderer. Barcodes somehow affect your health or the freshness of vegetables. The earth is flat. The issue is that some people believe shit someone tells them and deny any facts. This has always been a problem. I am all in for labeling content as AI generated. But it won't help with people trying to be malicious or who choose to be dumb. Forcing a watermark onto every picture won't help either; it will turn into a massive problem, and it's a solid pillar towards full-scale surveillance. Just the fact that analog cameras become by default less trustworthy than any digital device with watermarking is terrible. Even worse, phones will eventually have AI upscaling and the like by default, so you won't even be able to take an accurate picture without it being tagged AI. The information eventually becomes worthless. |
| |
| ▲ | sebzim4500 7 hours ago | parent | prev | next [-] | | I think the long term effect will be that photos and videos no longer have any evidentiary value legally or socially, absent a trusted chain of custody. | |
| ▲ | SV_BubbleTime 10 hours ago | parent | prev [-] | | It shouldn’t be that we panic about it and regulate the hell out of it. We could use the opportunity to deploy robust systems of verification and validation for all digital works - ones that allow for proving authenticity while respecting privacy if desired. For example… it’s insane that in the US we revolve around a paper social security number that we know damn well isn’t unique. Or that it’s a massive pain in the ass for most people to even check the hash of a download. Guess which we’ll do! |
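On the download-hash point: the check itself is only a few lines; the friction is knowing to do it and finding the published digest. A minimal sketch (the filename and expected digest are placeholders):

```python
import hashlib

# Digest as published by the vendor (placeholder value).
EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("installer.dmg") == EXPECTED)  # True only if the download is intact
```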
| |
| ▲ | commandlinefan 7 hours ago | parent | prev | next [-] | | > a new technology permits for $nefarious_intent But people with actual nefarious intent will easily be able to remove these watermarks, however they're implemented. This is copy protection and key escrow all over again - it hurts honest people and doesn't even slow down bad people. | |
| ▲ | hk__2 10 hours ago | parent | prev [-] | | > Knives (under a certain size) are not regulated. Guns are regulated in most countries. Atomic bombs are definitely regulated I don’t think this is a good comparison: knives are easy to produce, guns a bit harder, atomic bombs definitely harder. You should find something that is as easy to produce as a knife, but regulated. | | |
| ▲ | darkwater 10 hours ago | parent | next [-] | | The "product" to be regulated here is the LLM/model itself, not its output. Or, if you see the altered photo as the "product", then the "product" of the knife/gun/bomb is the damage it creates to a human body. | |
| ▲ | wing-_-nuts 9 hours ago | parent | prev [-] | | >You should find something that is as easy to produce as a knife, but regulated. The DEA and ATF have entered the chat | | |
|
| |
| ▲ | mh- 10 hours ago | parent | prev | next [-] | | Politicians absolutely were doing this 20-30 years ago. Plenty of folks here are old enough to remember debates on Slashdot around the Communications Decency Act, Child Online Protection Act, Children's Online Privacy Protection Act, Children's Internet Protection Act, et al. https://en.wikipedia.org/wiki/Communications_Decency_Act | | |
| ▲ | SV_BubbleTime 10 hours ago | parent [-] | | It’s annoying how effective “for the children” is. People really just turn off their brains for it. | |
| ▲ | Nifty3929 4 hours ago | parent [-] | | Nobody is doing it just "for the children" - that's just a fig-leaf justification for doing what many people want anyway: surveillance, tracking, and censorship (of other people, of course - just the bad ones doing/saying bad things). IOW - People aren't turning off their brains about "for the children" - they just want it anyway and don't think any further than that. |
|
| |
| ▲ | Nifty3929 4 hours ago | parent | prev | next [-] | | In the past, and maybe even to this very day - all color printers print hidden watermarks in faint yellow ink to assist with forensic identification of anything printed. Even for things printed in B&W (on a color printer). https://en.wikipedia.org/wiki/Printer_tracking_dots Yes, can we not jump on the surveillance/tracking/censorship bandwagon please? | |
| ▲ | BeetleB 9 hours ago | parent | prev | next [-] | | Easy to say until it impacts you in a bad way: https://www.nbcnews.com/tech/tech-news/ai-generated-evidence... > “My wife and I have been together for over 30 years, and she has my voice everywhere,” Schlegel said. “She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it’s from me and walk into any courthouse around the country with that recording.” > “The judge will sign that restraining order. They will sign every single time,” said Schlegel, referring to the hypothetical recording. “So you lose your cat, dog, guns, house, you lose everything.” At the moment, the only alternative is courts simply never accept photo/video/audio as evidence. I know if I were a juror I wouldn't. At the same time, yeah, watermarks won't work. Sure, Google can add a watermark/fingerprint that is impossible to remove, but there will be tools that won't put such watermarks/fingerprints. | | |
| ▲ | mkehrt 8 hours ago | parent [-] | | Testimony is evidence. I don't think most cases have any physical evidence. | | |
| |
| ▲ | dpark 10 hours ago | parent | prev | next [-] | | I suspect watermarking ends up being a net negative, as people learn to trust that lack of a watermark indicates authenticity. Propaganda won’t have the watermark. | |
| ▲ | llbbdd 10 hours ago | parent | prev | next [-] | | Unless they've recently changed it, Photoshop will actually refuse to open or edit images of at least US banknotes. | |
| ▲ | mlmonkey 10 hours ago | parent | prev | next [-] | | You do know that every color copier comes with the ability to identify US currency and would refuse to copy it? And that every color printer leaves a pattern of faint yellow dots on every printout that uniquely identifies the printer? | | |
| ▲ | sabatonfan 10 hours ago | parent | next [-] | | Is this something strictly with the US currency notes or is the same true for other countries currency as well? | | | |
| ▲ | potsandpans 10 hours ago | parent | prev [-] | | And that's not a good thing. | | |
| ▲ | wing-_-nuts 9 hours ago | parent | next [-] | | Nope, having a stable, trusted currency trumps whatever productive use one could have for an anonymous, currency-reproducing color printer | |
| ▲ | mlmonkey 10 hours ago | parent | prev | next [-] | | I'm just responding to this by OP: > Had politicians 20 years ago been this overreactive, they'd have demanded Photoshop leave a trace on anything it edited. | |
| ▲ | fwip 10 hours ago | parent | prev | next [-] | | Why not? Like, genuinely. | | |
| ▲ | potsandpans 9 hours ago | parent [-] | | I generally don't think it's good or just for a government to collude with manufacturers to track/trace its citizens without consent or notice. And even if notice were given, I'd still be against it. The arguments people put forward I generally don't find compelling -- for example, in this thread, protecting against counterfeiting. The "force" applied to address these concerns is totally out of proportion. Whenever these discussions happen, I feel like they descend into a general viewpoint: "if we could technically solve any possible crime, we should do everything in our power to solve it." I'm against this viewpoint, and acknowledge that that means _some crime_ occurs. That's acceptable to me. I don't feel that society is correctly structured to "treat" crime appropriately, and technology has outpaced our ability to holistically address it. Generally, I don't see (speaking for the US) the highest incarceration rate in the world as a good thing, or as generally effective, and I don't believe that increasing that number will change outcomes. | | |
| ▲ | fwip 5 hours ago | parent [-] | | Gotcha, thanks for the explanation. I think that personally, I agree with your stance that it's a bad kind of thing for government to do, but in practice I find that I'm in favor of the effects of this specific law. (Perhaps I need to do some thinking.) |
|
| |
| ▲ | oblio 10 hours ago | parent | prev [-] | | It depends on how you're looking at it. For the people not getting handed counterfeit currency, it's probably a good thing. | | |
| ▲ | fwip 10 hours ago | parent [-] | | Also probably good for the people trying to counterfeit money with a printer, better not to end up in jail for that. |
|
|
| |
| ▲ | rcruzeiro 10 hours ago | parent | prev | next [-] | | Try photocopying some US dollar bills. | |
| ▲ | Der_Einzige 7 hours ago | parent | prev [-] | | HN is full of authoritarian bootlickers who can't imagine that people can exist without a paternalistic force to keep them from doing bad things. |
|
|
| ▲ | losvedir 10 hours ago | parent | prev | next [-] |
| I'm sure Apple will roll something out in the coming years. Now that just anyone can easily AI themselves into a picture in front of the Eiffel tower, they'll want a feature that will let their users prove that they _really_ took that photo in front of the Eiffel tower (since to a lot of people sharing that you're on a Paris vacation is the point, more than the particular photo). I bet it will be called "Real Photos" or something like that, and the pictures will be signed by the camera hardware. Then iMessage will put a special border around it or something, so that when people share the photos with other Apple users they can prove that it was a real photo taken with their phone's camera. |
| |
| ▲ | pigpop 10 hours ago | parent | next [-] | | Does anyone other than you actually care about your vacation photos? There used to be a joke about people who did slideshows (on an actual slide projector) of their vacation photos at parties. | |
| ▲ | panarky 9 hours ago | parent | prev | next [-] | | > a real photo taken with their phone's camera How "real" are iPhone photos? They're also computationally generated, not just the light that came through the lens. Even without any other post-processing, iPhones generate gibberish text when attempting to sharpen blurry images, delete actual textures and replace them with smooth, smeared surfaces that look like watercolor or oil paintings, and combine data from multiple frames to give dogs five legs. | |
| ▲ | wyre 8 hours ago | parent [-] | | Don’t be a pedant. You know very well there is a big difference between a photo taken on an iPhone and a photo edited with Nano Banana. |
| |
| ▲ | omnimus 4 hours ago | parent | prev [-] | | This already exists. It's called a 35mm film camera. |
|
|
| ▲ | swatcoder 10 hours ago | parent | prev | next [-] |
The incentive for commercial providers to apply watermarks is so that they can safely route and classify generated content when it gets piped back in as training or reference data from the wild. That it's something some users want is mostly secondary, although it is something they can earn some social credit for by advertising. You're right that there will exist generated content without these watermarks, but you can bet that all the commercial providers burning $$$$ on state-of-the-art models will gradually coalesce around some means of widespread by-default/non-optional watermarking for content they let the public generate, so that they can all avoid drowning in their own filth. |
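A minimal sketch of that routing incentive - filtering crawled data through the provider's own detector before it re-enters training. The `detect` callable here is the provider's internal check and is purely hypothetical:

```python
from typing import Callable, Iterable, List

def filter_for_training(images: Iterable, detect: Callable[[object], bool]) -> List:
    """Keep only images that our own watermark detector does not flag as self-generated."""
    return [img for img in images if not detect(img)]
```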
|
| ▲ | slashdev 11 hours ago | parent | prev | next [-] |
If there were a standardized identifier, there would be software dedicated to just removing it. I don't see how it avoids the cat and mouse game. |
| |
| ▲ | paulryanrogers 11 hours ago | parent | next [-] | | It doesn't have to be perfect to be helpful. For example, it's trivial to post an advertisement without disclosure. Yet it's illegal, so large players mostly comply and harm is less likely on the whole. | | |
| ▲ | slashdev 10 hours ago | parent [-] | | You'd need a similar law around posting AI photos/videos without disclosure. Which maybe is where we're heading. It still won't prevent it, but it would prevent large players from doing it. |
| |
| ▲ | aqme28 11 hours ago | parent | prev [-] | | I don't think it will be easy to just remove it. It's built into the image and thus won't be the same every time. Plus, any service good at reverse-image search (like Google) can basically apply that to determine whether they generated it. There will always be a way to defeat anything, but I don't see why this won't work for like 90% of cases. | | |
| ▲ | dragonwriter 10 hours ago | parent | next [-] | | > I don't think it will be easy to just remove it. No, but model training technology is out in the open, so it will continue to be possible to train models and build model toolchains that just don't incorporate watermarking at all, which is what any motivated actor seeking to mislead will do; the only thing watermarking will do is train people to accept its absence as a sign of reliability, increasing the effectiveness of fakes by motivated bad actors. | |
| ▲ | famouswaffles 10 hours ago | parent | prev | next [-] | | It's an image. There's simply no way to add a watermark to an image that's both imperceptible to the user and non-trivial to remove. You'd have to pick one of those options. | | |
| ▲ | fwip 9 hours ago | parent | next [-] | | I'm not sure that's correct. I'm not an expert, but there's a lot of literature on digital watermarks that are robust to manipulation. It may be easier if you have an oracle on your end to say "yes, this image has/does not have the watermark," which could be the case for some proposed implementations of an AI watermark. (Often the use-case for digital watermarks assumes that the watermarker keeps the evaluation tool secret - this lets them find, e.g, people who leak early screenings of movies.) | |
| ▲ | aqme28 9 hours ago | parent | prev [-] | | That is patently false. | | |
| ▲ | flir 8 hours ago | parent [-] | | So, uh... do you know of an implementation that has both those properties? I'd be quite interested in that. | | |
|
| |
| ▲ | flir 11 hours ago | parent | prev | next [-] | | > I don't think it will be easy to just remove it. Always has been so far. You add noise until the signal gets swamped. In order to remain imperceptible it's a tiny signal, so it's easy to swamp. | |
| ▲ | rcarr 10 hours ago | parent | prev | next [-] | | You could probably just stick your image in another model or tool that didn't watermark and have it regenerate the image as accurately as possible. | | |
| ▲ | pigpop 9 hours ago | parent [-] | | Exactly, a diffusion model can denoise the watermark out of the image. If you wanted to be doubly sure you could add noise first and then denoise which should completely overwrite any encoded data. Those are trivial operations so it would be easy to create a tool or service explicitly for that purpose. |
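A rough numeric illustration of why smoothing hurts a faint per-pixel mark - a box blur here stands in for a real denoiser, and this toy ignores the image content the watermark also has to compete with. Robust schemes of the kind mentioned upthread are designed to survive some of this, so treat it as a sketch of the cat-and-mouse, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.standard_normal((256, 256))  # stand-in for a faint per-pixel watermark

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Crude 'denoiser': replace each pixel with its k x k neighborhood mean."""
    padded = np.pad(img, k // 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def corr(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(corr(pattern, pattern))            # 1.0   -- perfectly aligned with the key
print(corr(box_blur(pattern), pattern))  # ~0.33 -- one mild blur strips most of that alignment
```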
| |
| ▲ | 9 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | slashdev 10 hours ago | parent | prev | next [-] | | It would be like standardizing a captcha: you make a single target to defeat. Whether it is easy or hard is irrelevant. | |
| ▲ | VWWHFSfQ 11 hours ago | parent | prev [-] | | There will be a model trained to remove synthids from graphics generated by other models |
|
|
|
| ▲ | benlivengood 2 hours ago | parent | prev | next [-] |
You have to validate from the other direction. Let CCD sensors sign their outputs, and let digital photo-editing tools produce a chain of custody with further signatures. Maybe zero-knowledge proofs could provide anonymity; a simpler solution is to ship the same keys in every camera model, or to let them use anonymous sim-style cards with N-month certificate validity. Not everyone needs to prove the veracity of their photos, but make it cheap enough and most people probably will by default. |
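A minimal sketch of the signing half of that idea, using Ed25519 via the `cryptography` package. Key provisioning, the editing chain of custody, and any zero-knowledge layer are all hand-waved here, and in the proposed scheme the private key would live in the sensor or a secure element rather than in software:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stands in for a per-device (or per-model) key
device_pub = device_key.public_key()

def sign_capture(raw_bytes: bytes) -> bytes:
    """Sign a digest of the sensor output at capture time."""
    return device_key.sign(hashlib.sha256(raw_bytes).digest())

def verify_capture(raw_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the attested device public key can check provenance."""
    try:
        device_pub.verify(signature, hashlib.sha256(raw_bytes).digest())
        return True
    except InvalidSignature:
        return False

frame = b"\x00" * 1024                     # stand-in for raw sensor data
sig = sign_capture(frame)
print(verify_capture(frame, sig))          # True
print(verify_capture(frame + b"x", sig))   # False - any edit breaks the signature
```

Each editing step would then sign its own output together with the previous signature, forming the chain of custody described above.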
|
| ▲ | zaidf 10 hours ago | parent | prev | next [-] |
| This is what C2PA is trying to do: https://c2pa.org/ |
|
| ▲ | 2 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | xnx 11 hours ago | parent | prev | next [-] |
| SynthID has been in use for over 2 years. |
|
| ▲ | vunderba 10 hours ago | parent | prev | next [-] |
| Regardless of how you feel about this kind of steganography, it seems clear that outside of a courtroom, deepfakes still have the potential to do massive damage. Unless the watermark randomly replaces objects in the scene with bananas, these images/videos will still spread like wildfire on platforms like TikTok, where the average netizen's idea of due diligence is checking for a six‑fingered hand... at best. |
|
| ▲ | baby 11 hours ago | parent | prev | next [-] |
| It solves some problems! For example, if you want to run a camgirl website based on AI models and want to also prove that you're not exploiting real people |
| |
| ▲ | dragonwriter 10 hours ago | parent | next [-] | | > It solves some problems! For example, if you want to run a camgirl website based on AI models and want to also prove that you're not exploiting real people So, you exploit real people, but run your images through a realtime AI video transformation model doing either a close-to-noop transformation or something like changing the background so that it can't be used to identify the actual location if people do figure out you are exploiting real people, and then you have your real exploitation watermarked as AI fakery. I don't think this is solving a problem, unless you mean a problem for the would-be exploiter. | |
| ▲ | echelon 11 hours ago | parent | prev [-] | | Your use case doesn't even make sense. What customers are clamoring for that feature? I doubt any paying customer in the market for (that product) cares. If the law cares, the law has tools to inquire. All of this is trivially easy to circumvent ceremony. Google is doing this to deflect litigation and to preserve their brand in the face of negative press. They'll do this (1) as long as they're the market leader, (2) as long as there aren't dozens of other similar products - especially ones available as open source, (3) as long as the public is still freaked out / new to the idea anyone can make images and video of whatever, and (4) as long as the signing compute doesn't eat into the bottom line once everyone in the world has uniform access to the tech. The idea here is that {law enforcement, lawyers, journalists} find a deep fake {illegal, porn, libelous, controversial} image and goes to Google to ask who made it. That only works for so long, if at all. Once everyone can do this and the lookup hit rates (or even inquiries) are < 0.01%, it'll go away. It's really so you can tell journalists "we did our very best" so that they shut up and stop writing bad articles about "Google causing harm" and "Google enabling the bad guys". We're just in the awkward phase where everyone is freaking out that you can make images of Trump wearing a bikini, Tim Cook saying he hates Apple and loves Samsung, or the South Park kids deep faking each other into silly circumstances. In ten years, this will be normal for everyone. Writing the sentence "Dr. Phil eats a bagel" is no different than writing the prompt "Dr. Phil eats a bagel". The former has been easy to do for centuries and required the brain to do some work to visualize. Now we have tools that previsualize and get those ideas as pixels into the brain a little faster than ASCII/UTF-8 graphemes. At the end of the day, it's the same thing. And you'll recall that various forms of written text - and indeed, speech itself - have been illegal in various times, places, and jurisdictions throughout history. You didn't insult Caesar, you didn't blaspheme the medieval church, and you don't libel in America today. | | |
| ▲ | shevy-java 11 hours ago | parent [-] | | > What customers are clamoring for that feature? If the law cares, the law has tools to inquire. How can they distinguish real people being exploited from AI models autogenerating everything? I mean, right now this is possible, largely because a lot of AI videos still have shortcomings. But imagine 5 years from now ... | |
| ▲ | dragonwriter 7 hours ago | parent | next [-] | | > How can they distinguish from real people exploited to AI models autogenerating everything? Watermarking by compliant models doesn't help this much because (1) models without watermarking exist and can continue to be developed (especially if absence of a watermark is treated as a sign of authenticity), so you cannot rely on AI fakery being watermarked, and (2) AI models can be used for video-to-video generation without changing much of the source, so you can't rely on something accurately watermarked as "AI-generated" not being based in actual exploitation. Now, if the watermarking includes provenance information, and you require certain types of content to be watermarked not just as AI using a known watermarking system, but by a registered AI provider with regulated input data safety guardrails and/or retention requirements, and be traceable to a registered user, and... Well, then it does something when it is present, largely by creating a new content gatekeepiing cartel. | |
| ▲ | krisoft 10 hours ago | parent | prev [-] | | > How can they distinguish from real people exploited to AI models autogenerating everything? The people who care don't consume content which even just plausibly looks like real people exploited. They wouldn't consume the content even if you pinky promised that the exploited looking people are not real people. Even if you digitally signed that promise. The people who don't care don't care. |
|
|
|
|
| ▲ | domoritz 8 hours ago | parent | prev | next [-] |
| I don't understand why there isn't an obvious, visible watermark at all. Yes, one could remove it but let's assume 95% of people don't bother removing the visible watermark. It would really help with seeing instantly when an image was AI generated. |
|
| ▲ | DenisM 9 hours ago | parent | prev | next [-] |
It would be more productive for camera manufacturers to embed a per-device digital signature. Those who care to prove their image is genuine could publish both the pre- and post-processed images for transparency. |
|
| ▲ | staplers 11 hours ago | parent | prev | next [-] |
> have some kind of standardized identifier on them
Take this a step further and it'll be a personally identifying watermark (only the company can decode). Home printers already do this to some degree. |
| |
| ▲ | theoldgreybeard 11 hours ago | parent [-] | | yeah, personally identifying undetectable watermarks are kind of a terrifying prospect | |
| ▲ | overfeed 10 hours ago | parent [-] | | It is terrifying, but inevitable. Perhaps AI companies flooding the commons with excrement wasn't the best idea, now we all have to suffer the consequences. |
|
|
|
| ▲ | mortenjorck 10 hours ago | parent | prev | next [-] |
Reminder that even in the hypothetical world where every AI image is digitally watermarked, and all cameras have a TPM that writes a hash of every photo to the blockchain, there’s nothing to stop you from pointing that perfectly-verified camera at a screen showing your perfectly-watermarked AI image and taking a picture. Image verification has never been easy. People have been airbrushed out of and pasted into photos for over a century; AI just makes it easier and more accessible. Expecting a “click to verify” workflow is as unreasonable as it has ever been; only media literacy and a bit of legwork can accomplish this task. |
| |
| ▲ | fwip 9 hours ago | parent [-] | | Competent digital watermarks usually survive the 'analog hole'. Screen-cam resistant watermarks have been in use since at least 2020, and if memory serves, back to 2010 when I first started reading about them, but I don't recall what the technique was called back then. | |
|
|
| ▲ | echelon 11 hours ago | parent | prev | next [-] |
| This watermarking ceremony is useless. We will always have local models. Eventually the Chinese will release a Nano Banana equivalent as open source. |
| |
| ▲ | simonw 8 hours ago | parent | next [-] | | Qwen-Image-Edit is pretty good already: https://simonwillison.net/2025/Aug/19/qwen-image-edit/ | | | |
| ▲ | dragonwriter 10 hours ago | parent | prev [-] | | > We will always have local models. If watermarking becomes a legal mandate, it will inevitably include a prohibition on distributing (and using and maybe even possessing, but the distribution ban is the thing that will have the most impact, since it is the part that is most policeable, and most people aren't going to be training their own models, except, of course, the most motivated bad actors) open models that do not include watermarking as a baked-in model feature. So, for most users, it'll be much less accessible (and, at the same time, it won't solve the problem). | |
| ▲ | ahtihn 3 hours ago | parent [-] | | I don't see how banning distribution would do anything: distributing pirated games, movies, software is banned in most countries and yet pirated content is trivial to find for anyone who cares. As long as someone somewhere is publishing models that don't watermark output, there's basically nothing that can stop those models from being used. |
|
|
|
| ▲ | gigel82 10 hours ago | parent | prev | next [-] |
We need to be super careful with how legislation around this is passed and implemented. As it currently stands, I can totally see this becoming a backdoor to surveillance and government overreach. If social media platforms are required by law to categorize content as AI generated, that means they need to check with the public "AI generation" providers. And since there is no agreed-upon (public) standard for hashing imperceptible watermarks, that means the content (image, video, audio) needs to be uploaded in its entirety to the various providers to check whether it's AI generated. Yes, it sounds crazy, but that's the plan; imagine every image you post on Facebook/X/Reddit/Whatsapp/whatever getting uploaded to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc. to "check if it's AI". That's what the current law in Korea and the upcoming laws in California and the EU (for August 2026) require :( |
|
| ▲ | NoMoreNicksLeft 10 hours ago | parent | prev | next [-] |
| I don't believe that you can do this for photography. For AI-images, if the embedded data has enough information (model identification and random seed), one can prove that it was AI by recreating it on the fly and comparing. How do you prove that a photographic image was created by a CCD? If your AI-generated image were good enough to pass, then hacking hardware (or stealing some crypto key to sign it) would "prove" that it was a real photograph. Hell, it might even be possible for some arbitrary photographs to come up with an AI prompt that produces them or something similar enough to be indistinguishable to the human eye, opening up the possibility of "proving" something is fake even when it was actually real. What you want just can't work, not even from a theoretical or practical standpoint, let alone the other concerns mentioned in this thread. |
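For the first half of that - proving something is AI by recreating it - here is a sketch of the comparison step, assuming a hypothetical deterministic `generate(model, prompt, seed)` function and a payload recovered from the embedded data (neither corresponds to any real SynthID format or API):

```python
import numpy as np

def is_provably_ai(image: np.ndarray, payload: dict, generate, tolerance: float = 1.0) -> bool:
    """Re-run the claimed generation deterministically and compare pixels."""
    recreated = generate(payload["model"], payload["prompt"], payload["seed"])
    mean_abs_diff = np.abs(image.astype(np.float64) - recreated.astype(np.float64)).mean()
    return mean_abs_diff < tolerance
```

As the comment notes, there is no equivalent move in the other direction: nothing a sensor produces can be re-derived from a seed, so "real" can only be attested, never recomputed.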
|
| ▲ | lazide 10 hours ago | parent | prev | next [-] |
| It solves a real problem - if you have something sketchy, the big players can repudiate it, the authorities can more formally define the black market, and we can have a ‘war on deepfakes’ to further enable the authorities in their attempts to control the narratives. |
|
| ▲ | morkalork 11 hours ago | parent | prev | next [-] |
| Labelling open source models as "grey market" is a heck of a presumption |
| |
| ▲ | bigfishrunning 11 hours ago | parent | next [-] | | Every model is "grey market". They're all trained on data without complying with any licensing terms that may exist, be they proprietary or copyleft. Every major AI model is an instance of IP theft. | |
| ▲ | theoldgreybeard 11 hours ago | parent | prev [-] | | It's why I used "scare quotes". |
|
|
| ▲ | markdog12 11 hours ago | parent | prev [-] |
I asked Gemini "dynamic view" how SynthID works: https://gemini.google.com/share/62fb0eb38e6b