| ▲ | a2128 4 hours ago |
| AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustworthy: footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today. Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value. |
|
| ▲ | thisisit 2 hours ago | parent | next [-] |
| Laws will be passed to make it "safer". Just like what is happening with ID verification systems. Every image or video gen will require a watermark. Something visible that cannot be removed easily, or something hidden that can be detected and blocked. Access to models that do not comply will be made harder through ID verification checks or something. There will be some regulatory capture in between. The world will kick into gear only when something really bad happens. Maybe an influential person - someone rich, or a politician - will be fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle. |
| |
| ▲ | red-iron-pine 43 minutes ago | parent | next [-] | | > Laws will be passed to make it "safer". Just like what is happening with ID verification systems. Every image or video gen will require a watermark. Something visible that cannot be removed easily, or something hidden that can be detected and blocked. Access to models that do not comply will be made harder through ID verification checks or something. I've thought about this off and on, and about how to implement it. Not easily, was my general takeaway. Or rather, it's easy to implement, but you're in an adversarial relationship with bad actors, and easy implementations may be easily broken; e.g. your certs gotta come from somewhere and stay protected, and how do you update and control them? Key management for every single camera on every phone, etc. | |
| ▲ | Miraste an hour ago | parent | prev [-] | | Verification needs to work the other way around, some kind of verifiable chain of trust for photos and videos from real cameras. Watermarking all generated media is impossible. | | |
| ▲ | SirMaster 42 minutes ago | parent | next [-] | | I don't really understand why this is so hard or why it wasn't just done from the get-go. Just have Apple and Google digitally sign videos and photos recorded from phones, and then have Google and Meta, etc. display that they are authentic when shown on their platforms. | | |
| ▲ | alpha_squared 21 minutes ago | parent | next [-] | | You're talking about the metadata of the files, which can always be edited and someone will inevitably try to make software to do exactly that. Also, Adobe's proposal for handling generated content is exactly this and they're not able to get buy-in from other companies. | | |
| ▲ | SirMaster 18 minutes ago | parent [-] | | Edit the metadata in what way? It's a cryptographic hash, signed by the device. If the bits that make up the video as recorded by the camera no longer match the hash, then you know it was modified. That doesn't mean it's fake, it just means you should view it with skepticism. On the other hand, the ones that have not been modified and still match can be trusted. |
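A minimal sketch of that check, assuming the capture device holds a private signing key and verifiers know the matching public key; the function names are illustrative, not any vendor's real API:

    # Illustrative sketch only: assumes a per-device Ed25519 key pair;
    # not Apple's or Google's actual signing scheme.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def sign_capture(video_bytes: bytes, device_key: ed25519.Ed25519PrivateKey) -> bytes:
        # Hash the exact bytes the camera wrote, then sign that hash at capture time.
        return device_key.sign(hashlib.sha256(video_bytes).digest())

    def verify_capture(video_bytes: bytes, signature: bytes,
                       device_pub: ed25519.Ed25519PublicKey) -> bool:
        # Recompute the hash; if any bit of the file has changed, verification fails.
        try:
            device_pub.verify(signature, hashlib.sha256(video_bytes).digest())
            return True
        except InvalidSignature:
            return False

A failed check only tells you the file differs from what the camera signed (edited, re-encoded, or fabricated), which is the "view with skepticism" case above; a passing check ties the bits to whoever controls the device key.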
| |
| ▲ | Miraste 21 minutes ago | parent | prev [-] | | It becomes a hard problem quickly when you introduce editing, and most photos and videos on social media are edited. I'm not sure how it would work. It seems more feasible than universal watermarks, though. |
| |
| ▲ | petesergeant 22 minutes ago | parent | prev [-] | | You can bootstrap some of it. I wrote the following for solving this ~9 years ago. Kinda wish I'd done the PhD now: https://github.com/pjlsergeant/multimedia-trust-and-certific... |
|
|
|
| ▲ | roflmaostc 4 hours ago | parent | prev | next [-] |
| Partially agree.
| However, this problem has existed with scam e-mails since the 90s. For me the solution is in signed e-mails and signed documents. If the person invites me to an online meeting with a signed e-mail, I can trust that it's really them. Same for footage of wars, etc. The journalist taking it basically signs the videos and verifies its authenticity. If it is AI generated, then we would lose trust in that person and wouldn't use their material anymore. |
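As a rough illustration of that kind of check, assuming the sender's public key was obtained out of band (in person, or via a web of trust); the names are hypothetical, not a real mail-client API:

    # Illustrative sketch only: "trusted_keys" stands in for keys exchanged
    # out of band (in person or via a web of trust); not a real mail API.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # sender address -> previously exchanged public key
    trusted_keys: dict[str, ed25519.Ed25519PublicKey] = {}

    def invite_is_authentic(sender: str, message: bytes, signature: bytes) -> bool:
        key = trusted_keys.get(sender)
        if key is None:
            return False  # no prior trust relationship: treat the invite as unverified
        try:
            key.verify(signature, message)
            return True
        except InvalidSignature:
            return False

The hard part, as the replies below note, is where the keys come from and who vouches for them, not the signature math itself.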
| |
| ▲ | TheOtherHobbes 4 hours ago | parent | next [-] | | How do you prove the signature isn't fake? Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist. All of those have their issues. | | |
| ▲ | olmo23 3 hours ago | parent | next [-] | | I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this. | |
| ▲ | tenacious_tuna 3 hours ago | parent | prev | next [-] | | people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf. | | |
| ▲ | Ajedi32 2 hours ago | parent | next [-] | | That's a different problem though. It's doing it on their behalf, not on behalf of a scammer who's impersonating them. | | | |
| ▲ | bigfishrunning 3 hours ago | parent | prev | next [-] | | If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing. | |
| ▲ | MarsIronPI an hour ago | parent | prev [-] | | Well we should treat that as their own output. If it's crap, treat it the same way you would if they produced the crap themselves. |
| |
| ▲ | SirMaster 40 minutes ago | parent | prev [-] | | Same way security cameras prove that they are authentic camera recordings that have not been modified. If modified, the video will no longer match the signature that was generated with it. |
| |
| ▲ | mk89 3 hours ago | parent | prev | next [-] | | There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it. | |
| ▲ | strogonoff an hour ago | parent | prev | next [-] | | As with any problem, scale changes its nature. With cash, you can only steal so much (or have transactions of up to a certain size) until you run into geographical and physical constraints. With cryptocurrency, it's possible to lose any amount. With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount of them. At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history. | | |
| ▲ | pixl97 17 minutes ago | parent [-] | | > (or have transactions of up to a certain size) And by that you mean tens of millions to billions, right? Bank transfer scamming/fraud is a thing. | | |
| ▲ | strogonoff 4 minutes ago | parent [-] | | The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and payment systems can (and do) resolve these cases and have certain KYC requirements. |
|
| |
| ▲ | Forgeties79 4 hours ago | parent | prev | next [-] | | Spam emails in the 90’s don’t come remotely close to the operations people can set up by themselves with AI now. It doesn’t even compare. | |
| ▲ | hansonkd 2 hours ago | parent | prev [-] | | I mean emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else. |
|
|
| ▲ | friendzis 2 hours ago | parent | prev | next [-] |
| > Information found online will also no longer be trustworthy Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustworthy for a double-digit number of years now. > we already experience misleading articles today Again, this has been happening for decades. > footage of some incident somewhere may have been entirely fabricated by AI It's not like we did not already have doctored footage plaguing the public. > Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video The necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one). We may be dealing with the problem of spam, but the problems themselves were already there. |
| |
| ▲ | pstuart 2 hours ago | parent [-] | | All these are true, but just as it happened before the internet, it's accelerating even further. There are clear costs that cannot just be hand waved away. | | |
| ▲ | ottah an hour ago | parent [-] | | I'm not sure we can say it's accelerating. The techniques that adversarial actors use have always been changing, and when they shift tactics it can take a while for an adequate defense to be adopted. We're still dealing with SQL injection in the OWASP Top Ten. What I think would indicate an acceleration is when the most security-oriented organizations continuously fail to defend against new attacks. If we start hearing about JPMorgan and Google getting popped every month or two, we're in trouble. | | |
| ▲ | ACS_Solver 29 minutes ago | parent [-] | | The acceleration is in the decrease of the cost to produce misinformation. Misinformation in pure text form has always been cheapest, but is even cheaper now that text generation is basically a solved problem. Photos have been more expensive, it used to take time and skill with a photo editor to produce a believable image of an event that never happened. The cost is now very low, it's mostly about prompting skills. Fake videos were considerably harder, especially coupled with speech. Just a few years ago I could assume any video I saw was either real or a time-consuming, deliberate fake. We've now entered a time where fake videos of famous people take actual effort to tell apart, and can be produced for a low cost - something accessible to an individual, not a big corporation. We can have an entirely fake video of Trump, or another world leader, giving a speech and it will look like the real thing, with the audiovisual "tells" of it being fake getting harder to notice every few months. |
|
|
|
|
| ▲ | collinmcnulty 3 hours ago | parent | prev | next [-] |
| "Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist. |
|
| ▲ | chistev 3 hours ago | parent | prev | next [-] |
| We are still in the early stage of AI and already I struggle to tell what is real or fake on my Twitter feed. It will only get better in its deception with time. You know those incriminating Epstein photos with his associates? A few years from now a common defense from people like that would be that the photos were AI generated, and it would be difficult to prove them wrong beyond reasonable doubt. People in previous cases already attempted to dismiss incriminating pics of themselves as being the work of clever Photoshop artists. |
|
| ▲ | Forgeties79 4 hours ago | parent | prev | next [-] |
| > footage of some incident somewhere may have been entirely fabricated by AI, Or the opposite, where people attempt to get out of trouble by calling real evidence into question, dismissing it as "AI" |
| |
|
| ▲ | whateverboat 4 hours ago | parent | prev | next [-] |
| What's the solution apart from an identity providing service? |
| |
| ▲ | a2128 4 hours ago | parent | next [-] | | I don't know of a solution. I don't think even identity verification will meaningfully solve this. People will get hacked, or provide their SEO-spamming agent with their own identity, or purposefully post fake videos under their own identity. As it becomes more normal to scan your ID to access random websites, it will also become easier to steal people's identities and the value of identity verification will go down. | | |
| ▲ | intrasight 4 hours ago | parent | next [-] | | People don't get hacked - devices get hacked. So all we need is a better chain of trust between two people. This is not a technology development problem as much as a technology implementation problem. And a political problem. | |
| ▲ | bigfishrunning 3 hours ago | parent | next [-] | | People get hacked -- a device could be flawless, but if a person is a victim of "Social Engineering" and hands the attacker a password, there's nothing the designer of the device could do about it. | | |
| ▲ | soco 3 hours ago | parent [-] | | 2FA has tried to solve exactly this. Not many attacked people will hand over their password AND their phone. Yes I know, they might hand over one authentication code (and I know people who did exactly that)... We should also look into reducing the attack surface - if your Instagram gets hacked, your Facebook shouldn't get hacked as well. But the current big tech centralization leads us to that single point of failure, because they don't care about users' concerns, only about market grab. So... what now? Do we bring politics into this? | |
| ▲ | bigfishrunning 2 hours ago | parent [-] | | One authentication code is often all that's needed to *change where the authentication codes are sent*. Not to mention that most 2FA still uses SMS, which has its own well-understood security flaws. |
|
| |
| ▲ | prox 2 hours ago | parent | prev [-] | | The best thing I can think of is domain names. Domains are tied to addresses and billing, and sites are people or businesses, with physical locations one can visit. Maybe a good startup idea would be "local verify", where you check locally, on behalf of a client, whether the online destination is real. |
| |
| ▲ | nathanaldensr 4 hours ago | parent | prev [-] | | Agreed. The sphere of trust around each of us will shrink back to only those in our physical proximity. Outside of that, no one can be trusted. |
| |
| ▲ | jjulius an hour ago | parent | prev | next [-] | | Touching grass. Valuing in-person connections. Focusing on the community, meatspaces and actual people around you. Getting off of the Internet and off of our devices. It's not just a solution to AI/LLMs modifying our reality but also a solution to [gestures wildly at the cultural, societal and global communication impacts of the past ~16 years]. This sentiment is unpopular, but it's true. Prioritize true connections and experiences. | |
| ▲ | Gigachad 4 hours ago | parent | prev | next [-] | | I’m seeing a huge increase in companies requiring in person interviews now. Seems there is a real possibility the internet as we know it will be destroyed. | | |
| ▲ | dominotw 4 hours ago | parent | next [-] | | LinkedIn is completely destroyed now. There are tons of AI bots there, but real humans are now fronts for AI as well. So you can't even trust content from people you know. An identity service is not useful because that person might be a real person but just a pipe to AI, like we see on LinkedIn. | |
| ▲ | rkomorn 4 hours ago | parent | prev [-] | | I think you might be right and I think I'll like some of the consequences and hate some of the others. More in-person stuff feels like a win to me (and I say this as someone who probably counts as introverted). Not being able to trust any online interactions anymore? Seems like a new height in what was already a negative. |
| |
| ▲ | adithyassekhar 4 hours ago | parent | prev [-] | | That's just shifting the problem, not solving it. |
|
|
| ▲ | nslsm 3 hours ago | parent | prev | next [-] |
| If anything, deepfakes will be good for the economy, because if you can't do business with people who are far away it becomes harder to outsource. |
| |
|
| ▲ | esafak 2 hours ago | parent | prev | next [-] |
| It is already a problem. Try interviewing people from LinkedIn and you'll face an onslaught of imposters. https://www.darkreading.com/remote-workforce/north-korean-op... |
|
| ▲ | thunky 4 hours ago | parent | prev [-] |
| > damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person What damage are you talking about? I'm not sure I understand why it matters that there is no real person there if you can't actually tell the difference. You're just demonstrating that you don't actually need a human for whatever it is you're doing. |
| |
| ▲ | bigfishrunning 3 hours ago | parent | next [-] | | Your wife or mother calls you or video calls you and says to meet her somewhere, or to send money, or to pick up groceries or whatever. Does it not matter that it wasn't her? Could it be someone trying to manipulate you into going somewhere, to be robbed or whatever? At any rate, you'll need to verify that information came from the source you trust before you act on it, and that verification has a cost. The damage is to the trust we have in our communication media. The conclusion here is that every person is trivial to impersonate; that's the damage. | | |
| ▲ | thunky 3 hours ago | parent [-] | | Not disagreeing, but the context of GP was business/economy/hiring. Also it was already possible for someone to impersonate your mother via text or similar, and even easier to pull off. | | |
| ▲ | bigfishrunning 2 hours ago | parent | next [-] | | Ok fine, let's put it in the context of business. Your competitor impersonates your customer, gives you bad instructions. After following the bad instructions, you lose the contract with your customer, and your competitor (the attacker) is free to try and replace you. If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it. AI impersonation makes that much harder. | | |
| ▲ | thunky 2 hours ago | parent | next [-] | | > If you got a suspicious text, the logical thing is to call up the person who sent it and try to verify it The communication channel is what you trust. So you would call the person using that trusted channel. It's just like when you get a scam email or popup from "Microsoft" saying your laptop is compromised and you need to call their number ASAP. | |
| ▲ | Habgdnv 2 hours ago | parent | prev [-] | | Or even better, open the on-prem AI portal and type something like "I just got a suspicious call from client X, but I am on a lunch break. Call him and use a fake video of me. Ask him if what he said is true..." |
| |
| ▲ | contagiousflow 3 hours ago | parent | prev [-] | | You don't think people getting scammed is part of the economy? |
|
| |
| ▲ | rdevilla 3 hours ago | parent | prev | next [-] | | Because what you are actually doing is exchanging symbols, tokens, if you will, that may be redeemed in a future meatspace rendezvous for a good or service (e.g. a job, a parcel). These tokens are handshakes, contracts, video calls, etc. to be exchanged for the actual things merely represented therein. Instead what we have now with AI is people exchanging merely the tokens and being contented with the symbol in-and-of itself, as something valuable in its own right, with no need for an actual candidate or physical product underlying the symbol. There is a clip by McLuhan I can't be assed to find right now where he says eventually people will stop deriving pleasure from the products themselves and instead derive the feelings of (projected) accomplishment and pleasure from viewing advertisements about the product. The product itself becomes obsolete, for all you actually need to evoke the desired response is the advertisement, or the symbol. A hiring manager interviewing an AI and offering it a job is like buying the advertisement you just watched, and.... that's it. No more, the transaction is complete. | | |
| ▲ | pixl97 2 minutes ago | parent [-] | | >McLuhan Hmm, this guy may have been on to something >Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. [...] Terror is the normal state of any oral society, for in it everything affects everything all the time. [...] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture. --The Gutenberg Galaxy, 1962 |
| |
| ▲ | skydhash 3 hours ago | parent | prev | next [-] | | > What damage are you talking about? Not GP, but there's a lot of damage that can be done with impersonation. | |
| ▲ | chii 3 hours ago | parent | prev | next [-] | | The grandparent post has the belief that human interaction is intrinsically better. Not sure i agree, but i can understand the POV. However, the increase in fake videos that are difficult to tell from real is indeed a potential issue. But the fact that misinformation today is already so prevalent is evidence that better video doesn't make it any worse than it already is imho. | | |
| ▲ | collinmcnulty 2 hours ago | parent [-] | | You're not sure if human to human interaction is intrinsically more valuable than a human talking to a facsimile? That feels like a very dangerous position to hold for one's ethical calculations and general sanity. I'm clinging tightly to the value of the bond with other people, even the passing connection, but certainly with my family members as this article is about. |
| |
| ▲ | esseph 3 hours ago | parent | prev [-] | | Imagine how this plays out in courtrooms the world over for evidence. We're in deep shit. |
|