seizethecheese 5 hours ago

> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society

The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.

I say this as someone whose father was scammed out of a lot of money, so I'm certainly not numb to the potential consequences here. The scams were enabled by the internet; does the internet exist for this purpose? Of course not.

muvlon 5 hours ago | parent | next [-]

The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There's a small number of things that have gotten better because of AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.

pixl97 4 hours ago | parent | next [-]

Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will make the bad behaviors especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risk, and it may not be immediately evident. Few expected the information superhighway to turn into a dopamine drip feed for advertising dollars, yet here we are.

ethbr1 an hour ago | parent [-]

> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple of decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
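
A minimal sketch (in Python) of how such a charge would net out, with purely made-up numbers for the per-email price and the volumes:

    # "Sender pays, receiver is credited": charges net to ~$0 for balanced
    # correspondents but pile up fast for bulk senders. Numbers are hypothetical.
    PRICE_PER_EMAIL = 0.01  # dollars charged per mail sent, credited per mail received

    def net_cost(sent: int, received: int, price: float = PRICE_PER_EMAIL) -> float:
        """Net cost = what you pay to send minus what you're credited for receiving."""
        return (sent - received) * price

    print(net_cost(sent=40, received=38))        # ordinary user: $0.02
    print(net_cost(sent=1_000_000, received=50)) # bulk spammer: $9,999.50 per campaign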

n8cpdx 3 hours ago | parent | prev [-]

You’re way off base. It can also create sexually explicit pictures of men.

taurath 4 hours ago | parent | prev | next [-]

> I get that this is satire, but satire has to have some basis in truth.

Do you think it isn't used for this? The satire is in exaggerating that use case to say it exists purely for that purpose.

ajkjk 4 hours ago | parent | prev | next [-]

If you make a thing, and the thing is inevitably going to be used for a purpose, and you could do something about that use and you do not, then yes, it exists for that purpose, and you are responsible for it being used in that way. You don't get to say "ah well, who could have seen this inevitable thing happening? It's a shame nobody could do anything about it" when it was you who could have done something about it.

jychang 4 hours ago | parent | next [-]

Yeah. Example: stripper poles. Or Hitachi Magic Wands.

Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming, and it doesn't keep me from feeling a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.

pluralmonad 4 hours ago | parent | next [-]

I'm super confused what harms come from stripper poles and vibrators. I am prepared to accept that the joke might have gone right over my head.

blibble 3 hours ago | parent | prev | next [-]

how many front rooms have you walked into that had a stripper pole?

(also: what city? for a friend...)

wizardforhire 4 hours ago | parent | prev [-]

To be fair to the magic wands, that's why "massagers" were invented in the first place. [1] [2] [3]

[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...

[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...

[3] https://en.wikipedia.org/wiki/Female_hysteria

anonymars 4 hours ago | parent | prev [-]

> you...could have done something about it

What is it that isn't being done here, and who isn't doing it?

drzaiusx11 4 hours ago | parent | prev | next [-]

Training a model on voice data from readily available public social-network posts and targeting their followers (which on, say, FB would include family and plenty of "olds") isn't a very far-fetched use case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help, it's [grandson], I'm in jail and need bail. Send money to [scammer]".

If it's not happening yet, it will...

evandrofisico 4 hours ago | parent | next [-]

It is already happening. Recently, a Brazilian woman living in Italy was scammed into thinking she was in an online relationship with a Brazilian TikToker: the scammers created a fake profile and sent her audio messages with the TikToker's voice cloned via AI. She sent them a lot of money for the wedding, but only discovered the con when she arrived in Brazil.

bandrami 3 hours ago | parent | prev [-]

It's already happening in India. Voice fakes are working unnervingly well, and the problem is amplified by the fact that old people who had very little exposure to tech have basically been handed a smartphone that controls their pension money through an app.

rgmerk 4 hours ago | parent | prev | next [-]

My hypothesis: Generative AI is, in part, reaping the reaction that cryptocurrency sowed.

mrnaught 4 hours ago | parent | prev | next [-]

>> enabled by the internet, does the internet exist for this purpose? Of course not.

I think the point the article was trying to make is that LLMs and new genAI tools have helped scammers scale their operations.

ryan_lane 5 hours ago | parent | prev | next [-]

Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).

Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.

seizethecheese 5 hours ago | parent | next [-]

Not at all. I'm saying AI doesn't exist to scam the elderly, which says nothing about whether it's dangerous in that respect.

only-one1701 4 hours ago | parent | next [-]

Perhaps you’ve heard that the purpose of a system is what it does?

the_snooze 4 hours ago | parent | next [-]

Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed, or chose not, to think about second-order effects and the steady-state outcomes their systems would have. That's engineering 101 right there.

jacquesm 4 hours ago | parent [-]

That's because they were thinking about their stock options instead.

rcxdude 4 hours ago | parent | prev | next [-]

This phrase almost always seems to be invoked to attribute purpose (and, more specifically, intent and blame) to something based on its outcomes, when it is better read as a way to stop thinking in those terms in the first place.

irjustin 4 hours ago | parent | prev [-]

In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.

solid_fuel 4 hours ago | parent [-]

> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.

pixl97 4 hours ago | parent | next [-]

I mean, this is a pretty piss-poor example.

Email, by number of messages attempted, is dominated by spammers 10- to 100-fold over legitimate mail. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it reaches your mailbox.

To go back one step further: porn was one of the first successful businesses on the internet, and that alone was more than enough motivation for our more conservative members of Congress to want to ban the internet in the first place.

paulryanrogers 3 hours ago | parent [-]

Email volume is mostly robots fighting robots these days.

Today, if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deepfakes, and porn.

christianqchung 4 hours ago | parent | prev | next [-]

Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random Q&A, and maybe roleplay/chat are the most popular uses.

jacquesm 4 hours ago | parent [-]

Programmers are vastly outnumbered by people who do not program. Email/meeting summaries: maybe. Cheating on homework: maybe not your best example.

only-one1701 4 hours ago | parent | prev [-]

I was going to reply to the post above but you said it perfectly.

wk_end 4 hours ago | parent | prev | next [-]

No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”

username223 3 hours ago | parent [-]

And no one believes that Sam Altman thinks about much more than adding to his own wealth and power. His first idea was a failing location-data-harvesting app that got bought. Others have included biometric data harvesting with a crypto spin, and now this. If there's a through-line beyond manipulative scamming, I don't see it.

burnto 4 hours ago | parent | prev | next [-]

Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.

NicuCalcea 4 hours ago | parent | prev [-]

I can't think of many other reasons to create voice cloning AI, or deepfake AI (other than porn, of course).

rgmerk 3 hours ago | parent [-]

There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.

Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.

criley2 5 hours ago | parent | prev [-]

Sure, phones aren't directly doing the scamming, but they're supercharging the ability to do so.

Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.

Therefore, phones are bad?

This is of course before we talk about what criminals do with money, making money truly evil.

only-one1701 4 hours ago | parent | next [-]

Without phones, we couldn’t talk to people across great distances (oversimplification but you get it).

Without Generative AI, we couldn’t…?

shepherdjerred 4 hours ago | parent [-]

Are you really implying that generative AI doesn't enable things that were not previously possible?

Larrikin 4 hours ago | parent | next [-]

It's actually a fair question. There are software projects I wouldn't have taken on without an LLM: not because I couldn't build them, but because of the time needed to create them.

I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
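
The math itself is a one-liner once you pin down the program's numbers; here's a sketch with made-up rates (the real Wawa values would replace the constants):

    # Hypothetical reward rates -- substitute the real Wawa program numbers.
    POINTS_PER_DOLLAR_SANDWICH = 10
    POINTS_PER_DOLLAR_GAS = 2
    DOLLARS_PER_POINT_REDEEMED = 0.005  # cash value of one point at redemption

    def reward_rate(points_per_dollar: float) -> float:
        """Effective cash-back rate for a purchase category."""
        return points_per_dollar * DOLLARS_PER_POINT_REDEEMED

    for category, rate in [("sandwiches", POINTS_PER_DOLLAR_SANDWICH),
                           ("gas", POINTS_PER_DOLLAR_GAS)]:
        print(f"{category}: {reward_rate(rate):.1%} back")
    # With these made-up numbers sandwiches return 5.0% and gas 1.0%,
    # hence "strictly buy sandwiches and never gas".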

People have been making nude celebrity photos for decades now with just Photoshop.

Some activities have gotten a speed-up. But so far it was all possible before, just not always feasible.

jamiek88 4 hours ago | parent | prev | next [-]

Name some, then! I initially scoffed too, but I can only think of things LLMs make easier, not things that were impossible previously.

pixl97 3 hours ago | parent [-]

Isn't that true of the vast majority of products? By making things easier, they change the scale at which they're done. Farming wasn't impossible before the tractor.

People seemingly have some very odd views on products when it comes to AI.

freejazz 3 hours ago | parent | prev | next [-]

> were not previously possible?

How obtuse. The poster is saying they don't enable anything of value.

solid_fuel 4 hours ago | parent | prev | next [-]

Can you name one thing generative AI enables that wasn't previously possible?

pixl97 3 hours ago | parent | prev [-]

Can you name one thing a plow enables that wasn't previously possible?

This line of thinking is ridiculous.

queenkjuul 4 hours ago | parent | prev [-]

For the most part, it hasn't. What do you consider previously impossible, and how is it good for the world?

JumpCrisscross 5 hours ago | parent | prev [-]

> Therefore, phones are bad?

Phones are utilities. AI companies are not.

gosub100 5 hours ago | parent | prev | next [-]

It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.

Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.

JumpCrisscross 5 hours ago | parent [-]

> the voice and video impersonation is definitely being used to scam elderly people

And as with child pornography, the AI companies are engaging in high-octane buck-passing more than actually trying to tamp down the problem.

solid_fuel 5 hours ago | parent | prev | next [-]

LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:

- advertising

- astroturfing

- other forms of botting

- scamming old people out of their money

echelon 4 hours ago | parent | next [-]

It's easily doubled my productivity as an engineer.

As a filmmaker, my friends and I are getting more and more done as well:

https://www.youtube.com/watch?v=tAAiiKteM-U

https://www.youtube.com/watch?v=oqoCWdOwr2U

As long as humans are driving, I see AI as an exoskeleton for productivity:

https://github.com/storytold/artcraft (this is what I'm making)

It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social-media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.

I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Apart from all the other madness in the world, this is the one thing that has been a dream come true.

As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.

There's financial capital and there's labor capital. AI is a force multiplier for labor capital.

navigate8310 4 hours ago | parent | next [-]

> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

While I certainly respect your interactivity and the resulting force-multiplier nature of AI, that doesn't mean you should try to emulate an existing piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.

blks 4 hours ago | parent | prev | next [-]

So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of this is generated by a chatbot?

Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived; the reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.

echelon 4 hours ago | parent [-]

> So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of this is generated by a chatbot?

There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.

I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.

Here's a really old example of what that looks like (the models are a lot better at this now):

https://www.youtube.com/watch?v=QYVgNNJP6Vc
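
Roughly, that kind of blocking-driven workflow can be sketched with the diffusers library; the model IDs, file names, and parameter values below are illustrative, not the exact setup described above:

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    # Illustrative checkpoints: a pose ControlNet on top of an SD 1.5 base model.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # "3D blocking": a rough Blender/Unreal render fixes composition, camera and
    # focal length, while a pose map pins down where the characters stand.
    blocking_render = load_image("blocking_render.png")  # rough 3D scene render
    pose_map = load_image("pose_map.png")                # OpenPose skeleton image

    frame = pipe(
        prompt="cinematic shot, 35mm lens, two characters arguing in a diner",
        image=blocking_render,   # init image steers the overall composition
        control_image=pose_map,  # ControlNet condition pins the poses
        strength=0.6,            # how far the model may drift from the blocking
        guidance_scale=7.0,
        num_inference_steps=30,
    ).images[0]

    frame.save("shot_v1.png")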

There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.

heliumtera an hour ago | parent [-]

There are probably more tools for achieving this level of productivity than there are real humans interested in consuming this slop.

gllmariuty 4 hours ago | parent | prev | next [-]

> AI is a force multiplier for labor capital

For a 2011 account, that's a shockingly naive take.

Yes, AI is a labor-capital multiplier. And the multiplicand is zero.

Hint: soon you'll be competing not with humans without AI, but with AIs using AIs.

Terr_ 3 hours ago | parent [-]

Even if it's >1, it doesn't follow that it's good news for the "labor capitalist".

"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"

heliumtera an hour ago | parent | prev | next [-]

always good to be in the pick and shovel biz

jacquesm 4 hours ago | parent | prev | next [-]

As a rule real creativity blossoms under constraints, not under abundance.

echelon 3 hours ago | parent [-]

Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.

But to be more in the spirit of your comment: if you've used these systems at all, you know how many constraints you bump into on an almost minute-to-minute basis. These are not magical systems, and they have plenty of flaws.

Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.

queenkjuul 4 hours ago | parent | prev [-]

Genuine question: does the agent work for you if you didn't build it, train it, or host it?

It's ostensibly doing things you asked it, but in terms dictated by its owner.

blibble 4 hours ago | parent [-]

Indeed.

And it's even worse than that: you're literally training your replacement, since it retransmits what you accept or discard as you use it.

And you're even paying them to replace you.

ajross 5 hours ago | parent | prev [-]

> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people".

Saying that the tools make mistakes is correct. Saying that they can never be trained and deployed the way people are, such that the mistakes are tolerable, is an awfully tall order.

History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.

solid_fuel 4 hours ago | parent | next [-]

> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.

Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]

[0] https://arxiv.org/abs/2401.11817

TheOtherHobbes 4 hours ago | parent | next [-]

The suggestion that hallucinations are avoidable in humans is quite a bold claim.

CamperBob2 4 hours ago | parent | prev [-]

What you (and the authors) call "hallucination," other people call "imagination."

Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.

blibble 3 hours ago | parent [-]

What I call it is "buggy garbage".

It's not a person; it doesn't hallucinate or have imagination.

It's simply unreliable software, riddled with bugs.

CamperBob2 20 minutes ago | parent [-]

(Shrug) Perhaps other sites beckon.

fao_ 4 hours ago | parent | prev | next [-]

> Saying that they can never be trained and deployed the way people are, such that the mistakes are tolerable, is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.

ajross 4 hours ago | parent [-]

> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.

TheOtherHobbes 4 hours ago | parent [-]

It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.

awesome_dude 5 hours ago | parent | prev | next [-]

I think the point isn't that the scams/distrust are "new" with the advent of AI, but that they're "easier" and "more polished" than before.

Language is no longer a serious barrier for the scammer or a tell for the reader ("a real bank would never write like that" has become "well, that's exactly what they would say, the way they would say it").

techblueberry 2 hours ago | parent | prev | next [-]

Porn was enabled by the internet, but does the internet exist for this purpose?

Yes. Yes it does. That is the satire.

wat10000 3 hours ago | parent | prev | next [-]

They're used for scams. Isn't that the basis in truth you're looking for in satire?

Before this we had "the internet is for porn." Same sort of exaggerated statement.

ryanobjc 5 hours ago | parent | prev | next [-]

I mean... explain sora.

internet101010 4 hours ago | parent [-]

Revolutionizing cat memes

popalchemist 41 minutes ago | parent | prev | next [-]

While the employees of the companies that make AI may have noble, even humanity-redeeming/saving intentions, the billionaire class absolutely has Bond-villain-level intentions. The destruction of the middle class and the removal of all livable-wage jobs is absolutely part of the techno-feudalist playbook that Trump, Altman, Zuckerberg, etc. are intentionally moving toward. I'd say that is a scam. They want to recreate the conditions of an earlier society: an upper class (themselves, owning the entire means of production and able to operate the entire machine without any input from the peons) that does whatever it wants because the lower class is incapable of opposing it.

If you aren't familiar, look into it.

GoodJokes 21 minutes ago | parent | prev | next [-]

[dead]

some_furry 5 hours ago | parent | prev | next [-]

[flagged]

ameliaquining 5 hours ago | parent [-]

The person you're replying to is probably not personally a major AI magnate.

thegrim000 5 hours ago | parent | next [-]

You mean the guy who has "YC and VC backed founder" in his bio and has made multiple posts in the last couple of months dismissing different negative takes on AI? Yeah, that guy probably doesn't have significant funds tied up in the success of AI.

seizethecheese 5 hours ago | parent | next [-]

I don’t, actually, unless you call index funds “tied up”.

To be honest, it's really distasteful to make a high-level comment about this article and then have people rush to attack me personally. This is the mentality of a mob.

Barrin92 4 hours ago | parent | next [-]

In this case, a more appropriate term for the mob is "the people", because one defining dynamic of the rollout of this technology is that a minority of people seem extremely invested in shoving it into the faces of a majority who don't want it, and then claiming that they are visionaries and everyone else is "the mob".

Just like with Mark Zuckerberg's "Metaverse", we're now in a post-market vanity economy where not consumer demand but increasingly desperate founders, investors, and gurus try to justify their valuations by doling out products for free and shoving AI services into everything to justify the tens of billions they've dumped into it.

I'm sorry that some people's pension funds, startup funding, and increasingly the entire American economy rest on this collective delusion, but it's not really most people's problem.

Brian_K_White 4 hours ago | parent | prev [-]

One thing this characterization is not is honest.

seizethecheese 4 hours ago | parent [-]

What part is not honest?

shimman 5 hours ago | parent | prev [-]

It becomes insulting when they think we're this foolish.

some_furry 5 hours ago | parent | prev | next [-]

No, but the attitude is congruent, even if they don't have the investment money lying around to fill the shoes exactly.

gllmariuty 5 hours ago | parent | prev | next [-]

The article forgot to mention the usual "think about the water usage".

Retric 5 hours ago | parent | next [-]

What’s the point of attacking a straw man while ignoring the actual points being brought up?

The water usage of data centers is fairly trivial in most places. The water used in manufacturing the physical infrastructure and in electricity generation is surprisingly large, but again mostly irrelevant. Yet modern 'AI' has all sorts of actual problems.

seizethecheese 5 hours ago | parent | prev | next [-]

It mentions ecological destruction, which I must say is a much better complaint than water usage; AI is a power hog, after all.

rootnod3 5 hours ago | parent | prev [-]

If it's the "usual reply", maybe it's because... I dunno... water is kinda important?

queenkjuul 4 hours ago | parent [-]

I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources I generally trust, and it's no secret that an all-GPU AI data center is more resource-intensive than a general-purpose data center.

vitajex 2 hours ago | parent | prev [-]

> satire has to have some basis in truth

In order to be funny at least!