Please don't say mean things about the AI I just invested a billion dollars in(mcsweeneys.net)
503 points by randycupertino 4 hours ago | 221 comments
seizethecheese 4 hours ago | parent | next [-]

> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society

The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.

I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to potential consequences there. The scams were enabled by the internet, does the internet exist for this purpose? Of course not.

muvlon 3 hours ago | parent | next [-]

The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.

n8cpdx 2 hours ago | parent | next [-]

You’re way off base. It can also create sexually explicit pictures of men.

pixl97 2 hours ago | parent | prev [-]

Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized it is a force multiplier for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.

taurath 3 hours ago | parent | prev | next [-]

> I get that this is satire, but satire has to have some basis in truth.

Do you think that it isn't used for this? The satire part is to expand that use case and say it exists purely for that purpose.

ajkjk 3 hours ago | parent | prev | next [-]

if you make a thing and the thing is going to be inevitably used for a purpose and you could do something about that use and you do not --- then yes, it exists for that purpose, and you are responsible for it being used in that way. you don't get to say "ah well who could have seen this inevitable thing happening? it's a shame nobody could do anything about it" when it was you that could have done something about it.

jychang 3 hours ago | parent | next [-]

Yeah. Example: stripper poles. Or hitachi magic wands.

Those poles WERE NOT invented for strippers/pole dancers. Ditto for the hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a hitachi magic wand in your living room.

pluralmonad 3 hours ago | parent | next [-]

I'm super confused what harms come from stripper poles and vibrators. I am prepared to accept that the joke might have gone right over my head.

blibble 2 hours ago | parent | prev | next [-]

how many front rooms have you walked into that had a stripper pole?

(also: what city? for a friend...)

wizardforhire 3 hours ago | parent | prev [-]

To be fair to the magic wands, that's why "massagers" were invented in the first place. [1] [2] [3]

[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...

[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...

[3] https://en.wikipedia.org/wiki/Female_hysteria

anonymars 3 hours ago | parent | prev [-]

> you...could have done something about it

What is it that isn't being done here, and who isn't doing it?

drzaiusx11 3 hours ago | parent | prev | next [-]

Training a model on sound data from readily available public social network posts and targeting their followers (which on say fb would include family and is full of "olds") isn't a very far fetched use-case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help it's [grandson], I'm in jail and need bail. Send money to [scammer]"

If it's not happening yet, it will...

bandrami 2 hours ago | parent | next [-]

It's already happening in India. Voicefakes are working unnervingly well and it's amplified by the fact that old people who had very little exposure to tech have basically been handed a smart phone that has control of their pension fund money in an app.

evandrofisico 2 hours ago | parent | prev [-]

It is happening already. Recently a Brazilian woman living in Italy was scammed into thinking she was having an online relationship with a Brazilian TikToker; the scammers created a fake profile and sent her audio messages with the voice of said TikToker cloned via AI. She sent the scammers a lot of money for the wedding, but when she arrived in Brazil she discovered the con.

vitajex 21 minutes ago | parent | prev | next [-]

> satire has to have some basis in truth

In order to be funny at least!

rgmerk 2 hours ago | parent | prev | next [-]

My hypothesis: Generative AI is, in part, reaping the reaction that cryptocurrency sowed.

mrnaught 2 hours ago | parent | prev | next [-]

>> enabled by the internet, does the internet exist for this purpose? Of course not.

I think the point the article was trying to make is that LLMs and new genAI tools have helped scammers scale their operations.

ryan_lane 3 hours ago | parent | prev | next [-]

Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).

Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.

seizethecheese 3 hours ago | parent | next [-]

Not at all. I’m saying AI doesn’t exist to scam elderly, which is saying nothing about whether it’s dangerous in that respect.

only-one1701 3 hours ago | parent | next [-]

Perhaps you’ve heard that the purpose of a system is what it does?

the_snooze 3 hours ago | parent | next [-]

Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and what steady-state outcomes their systems will have. That's engineering 101 right there.

jacquesm 2 hours ago | parent [-]

That's because they were thinking about their stock options instead.

irjustin 3 hours ago | parent | prev | next [-]

In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.

solid_fuel 3 hours ago | parent [-]

> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.

pixl97 2 hours ago | parent | next [-]

I mean, this is a pretty piss-poor example.

Email, by number of messages attempted, is dominated by spammers 10 to 100 fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.

To go back one step further, porn was one of the first successful businesses on the internet; that was more than enough motivation for our more conservative members of Congress to want to ban the internet in the first place.

paulryanrogers an hour ago | parent [-]

Email volume is mostly robots fighting robots these days.

Today, if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deep fakes, and porn.

christianqchung 3 hours ago | parent | prev | next [-]

Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random QA, and maybe roleplay/chat are the most popular uses.

jacquesm 2 hours ago | parent [-]

Programmers are vastly outnumbered by people who do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.

only-one1701 3 hours ago | parent | prev [-]

I was going to reply to the post above but you said it perfectly.

rcxdude 3 hours ago | parent | prev [-]

This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, when it is better understood as a way to stop thinking in those terms in the first place.

wk_end 3 hours ago | parent | prev | next [-]

No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”

username223 2 hours ago | parent [-]

And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.

burnto 3 hours ago | parent | prev | next [-]

Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.

NicuCalcea 2 hours ago | parent | prev [-]

I can't think of many other reasons to create voice cloning AI, or deepfake AI (other than porn, of course).

rgmerk 2 hours ago | parent [-]

There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.

Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.

criley2 3 hours ago | parent | prev [-]

Sure, phones aren't directly doing the scamming, but they're supercharging the ability to do so.

Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.

Therefore, phones are bad?

This is of course before we talk about what criminals do with money, making money truly evil.

only-one1701 3 hours ago | parent | next [-]

Without phones, we couldn’t talk to people across great distances (oversimplification but you get it).

Without Generative AI, we couldn’t…?

shepherdjerred 3 hours ago | parent [-]

Are you really implying that generative AI doesn't enable things that were not previously possible?

Larrikin 3 hours ago | parent | next [-]

It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make it. But because of the time needed to create it.

I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.

People have been making nude celebrity photos for decades now with just Photoshop.

Some activities have gotten a speed up. But so far it was all possible before just possibly not feasible.

jamiek88 3 hours ago | parent | prev | next [-]

Name some then! I initially scoffed too, but I can only think of stuff LLMs make easier, not things that were impossible previously.

pixl97 2 hours ago | parent [-]

Isn't that the vast majority of products? By making things easier, they change the scale at which those things are accomplished. Farming wasn't impossible before the tractor.

People seemingly have some very odd views on products when it comes to AI.

freejazz 2 hours ago | parent | prev | next [-]

> were not previously possible?

How obtuse. The poster is saying they don't enable anything of value.

solid_fuel 3 hours ago | parent | prev | next [-]

Can you name one thing generative AI enables that wasn't previously possible?

pixl97 2 hours ago | parent [-]

Can you name one thing a plow enables that wasn't previously possible?

This line of thinking is ridiculous.

queenkjuul 2 hours ago | parent | prev [-]

For the most part, it hasn't. What do you consider previously impossible, and how is it good for the world?

JumpCrisscross 3 hours ago | parent | prev [-]

> Therefore, phones are bad?

Phones are utilities. AI companies are not.

gosub100 3 hours ago | parent | prev | next [-]

It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.

Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.

JumpCrisscross 3 hours ago | parent [-]

> the voice and video impersonation is definitely being used to scam elderly people

And as with child pornography, the AI companies are engaging in high-octane buck-passing rather than actually trying to tamp down the problem.

solid_fuel 3 hours ago | parent | prev | next [-]

LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:

- advertising

- astroturfing

- other forms of botting

- scamming old people out of their money

echelon 3 hours ago | parent | next [-]

It's easily doubled my productivity as an engineer.

As a filmmaker, my friends and I are getting more and more done as well:

https://www.youtube.com/watch?v=tAAiiKteM-U

https://www.youtube.com/watch?v=oqoCWdOwr2U

As long as humans are driving, I see AI as an exoskeleton for productivity:

https://github.com/storytold/artcraft (this is what I'm making)

It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.

I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Apart from all the other madness in the world, this is the one thing that has been a dream come true.

As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.

There's financial capital and there's labor capital. AI is a force multiplier for labor capital.

navigate8310 3 hours ago | parent | next [-]

> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

While I certainly respect your enthusiasm and the force-multiplier nature of AI, this doesn't mean you should try to emulate an already existing piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.

gllmariuty 3 hours ago | parent | prev | next [-]

> AI is a force multiplier for labor capital

for a 2011 account that's a shockingly naive take

yes, AI is a labor capital multiplier. and the multiplicand is zero

hint: soon you'll be competing not with humans without AI, but with AIs using AIs

Terr_ 2 hours ago | parent [-]

Even if it's >1, it doesn't follow that it's good news for the "labor capitalist".

"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"

blks 3 hours ago | parent | prev | next [-]

So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?

Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.

echelon 3 hours ago | parent [-]

> So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?

There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.

I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.

Here's a really old example of what that looks like (the models are a lot better at this now) :

https://www.youtube.com/watch?v=QYVgNNJP6Vc

There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.
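
For readers unfamiliar with the tools named above, here is a minimal sketch of one such image-to-image pass guided by a ControlNet, assuming the Hugging Face diffusers library; the model names, file names, and parameters are illustrative placeholders, not the actual pipeline described in this thread:

    # Illustrative sketch only: an img2img pass constrained by a Canny ControlNet.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    # Load a Canny-edge ControlNet and a base Stable Diffusion checkpoint.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Hypothetical inputs: a rough 3D blockout render and its edge map.
    init_image = load_image("blockout_render.png")
    control_image = load_image("blockout_edges.png")

    result = pipe(
        prompt="cinematic wide shot, 35mm lens, moody lighting",
        image=init_image,             # image-to-image source
        control_image=control_image,  # constrains composition/pose
        strength=0.6,                 # how far to deviate from the source image
        num_inference_steps=30,
    ).images[0]
    result.save("shot_v1.png")

In a workflow like the one described above, the control image would come from the 3D blockout render, which is what pins down composition and camera framing while the prompt controls the look.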

jacquesm 2 hours ago | parent | prev | next [-]

As a rule real creativity blossoms under constraints, not under abundance.

echelon an hour ago | parent [-]

Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.

But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.

Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.

queenkjuul 2 hours ago | parent | prev [-]

Genuine question: does the agent work for you if you didn't build it, train it, or host it?

It's ostensibly doing things you asked it, but in terms dictated by its owner.

blibble 2 hours ago | parent [-]

indeed

and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding

and you're even paying them to replace you

ajross 3 hours ago | parent | prev [-]

> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people".

Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.

solid_fuel 3 hours ago | parent | next [-]

> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.

Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]

[0] https://arxiv.org/abs/2401.11817

TheOtherHobbes 2 hours ago | parent | next [-]

The suggestion that hallucinations are avoidable in humans is quite a bold claim.

CamperBob2 3 hours ago | parent | prev [-]

What you (and the authors) call "hallucination," other people call "imagination."

Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.

blibble 2 hours ago | parent [-]

what I call it is "buggy garbage"

it's not a person, it doesn't hallucinate or have imagination

it's simply unreliable software, riddled with bugs

fao_ 3 hours ago | parent | prev [-]

> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.

ajross 2 hours ago | parent [-]

> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.

TheOtherHobbes 2 hours ago | parent [-]

It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.

techblueberry an hour ago | parent | prev | next [-]

Porn was enabled by the internet, but does the internet exist for this purpose?

Yes. Yes it does. That is the satire.

awesome_dude 3 hours ago | parent | prev | next [-]

I think that maybe the point isn't that the scams/distrust are "new" with the advent of AI, but "easier" and "more polished" than before.

The language of the reader is no longer a serious barrier/indicator of a scam. "A real bank would never talk like that" has become "well, that's something they would say, the way that they would say it."

wat10000 an hour ago | parent | prev | next [-]

They're used for scams. Isn't that the basis in truth you're looking for in satire?

Before this we had "the internet is for porn." Same sort of exaggerated statement.

ryanobjc 3 hours ago | parent | prev | next [-]

I mean... explain Sora.

internet101010 2 hours ago | parent [-]

Revolutionizing cat memes

gllmariuty 4 hours ago | parent | prev [-]

article forgot to mention the usual "think about the water usage"

Retric 4 hours ago | parent | next [-]

What’s the point of attacking a straw man while ignoring the actual points being brought up?

The water usage by data centers is fairly trivial in most places. The water use manufacturing the physical infrastructure + electricity generation is surprisingly large but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.

seizethecheese 4 hours ago | parent | prev | next [-]

It mentions ecological destruction, which I must say is a much better point than water usage; AI is a power hog, after all.

rootnod3 4 hours ago | parent | prev [-]

If it's the "usual reply", maybe it's because....I dunno...water is kinda important?

queenkjuul 2 hours ago | parent [-]

I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources I generally trust, and it's no secret an all-GPU AI data center is more resource intensive than a general purpose data center.

quantum_state 3 hours ago | parent | prev | next [-]

Viewed from a historical perspective, big tech is really reaping the benefits of the intellectual wealth accumulated over many thousands of years by humanity collectively. This should be recognized in order to find a better path forward.

mediaman 2 hours ago | parent | next [-]

How? They are all losing tens of billions of dollars on this, so far.

Open source models are available at highly competitive prices for anyone to use and have closed the gap to within 6-8 months of frontier proprietary models.

There doesn't appear to be any moat.

This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax, but the AI business looks terrible, and it appears that most benefits are going to accrue fairly broadly across the economy, not to a few tech titans.

NVIDIA is the one exception to that, since there is a big moat on their business, but not clear how long that will last either.

TheColorYellow 2 hours ago | parent | next [-]

I'm not so sure that's correct. The Labs seem to offer the best overall products in addition to the best models. And requirements for models are only going to get more complex and stringent going forward. So yes, open source will be able to keep up from a pure performance standpoint, but you can imagine a future state where only licensed models can be used in commercial settings, and licensing will require compliance around limiting subversive use or similar (e.g. no sexualization of minors, doesn't let you make a bomb, etc.).

When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).

There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.

cj 44 minutes ago | parent | next [-]

If that's the case, the winner will likely be cloud providers (AWS, GCP, Azure) who do compliance and enterprise very well.

If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.

Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.

tru3_power an hour ago | parent | prev [-]

What’s the purpose of licensing requiring though things though if someone could just use an open source model to do that anyway? If someone were going to do those things you mentioned why do it through some commercial enterprise tool? I can see maybe licensing requiring a certain level of hardening to prevent prompt injections, but ultimately it still really comes down to how much power you give the model in whatever context it’s operating in.

gizmodo59 2 hours ago | parent | prev | next [-]

Nvidia is not the only exception. Private big names are losing money, but there are so many public companies having the time of their lives. Power, materials, DRAM, and storage, to name a few. The demand is truly high.

What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of benefits. The value is not ZERO. It's not 100. It's somewhere in between.

CrossVR 35 minutes ago | parent [-]

I believe that eventually the AI bubble will evolve into a simple scheme to corner the compute market. If no one can afford high-end hardware anymore, then the companies who hoarded all the DRAM and GPUs can simply go rent-seeking by selling the compute back to us at exorbitant prices.

mikestorrent 27 minutes ago | parent | next [-]

The demand for memory is going to result in more factories and production. As long as demand is high, there's still money to be made in going wide to the consumer market with thinner margins.

What I predict is that we won't advance in memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce this; so it has value, and we may see platforms come out with newer designs on older fabs.

Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.

https://www.openchipletatlas.org/

digiown 29 minutes ago | parent | prev [-]

That makes no sense. If the bubble bursts, there will be a huge oversupply and prices will fall. Unless Micron, Samsung, Nvidia, AMD, etc. all go bankrupt overnight, prices won't go up when demand vanishes.

charcircuit 24 minutes ago | parent [-]

There is a massive undersupply of compute right now for the current level of AI. The bubble bursting doesn't fix that.

charcircuit an hour ago | parent | prev | next [-]

>losing tens of billions

They are investing tens of billions.

bigstrat2003 20 minutes ago | parent | next [-]

They are wasting tens of billions on something that has no business value currently, and may well never, just because of FOMO. That's not what I would call an investment.

bandrami 43 minutes ago | parent | prev [-]

They are washing tens of billions of dollars in an industry-wide attempt to keep the music playing.

gruez an hour ago | parent | prev | next [-]

>Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or openssl. You can't maintain them with a few engineers' free time.

compounding_it an hour ago | parent | next [-]

Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.

Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.

mikestorrent 25 minutes ago | parent [-]

Not sure I totally follow. I'd love to better understand why companies are open sourcing models at all.

edoceo an hour ago | parent | prev [-]

If the bubble is over, all the built infrastructure would become cheaper to train on, so those open models would incinerate less? Maybe there is an increase in specialist models?

Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.

bandrami 41 minutes ago | parent [-]

No, if the bubble ends the use of all that built infrastructure stops being subsidized by an industry-wide wampum system where money gets "invested" and "spent" by the same two parties.

fHr 40 minutes ago | parent | prev | next [-]

The other side of the market:

yowlingcat 2 hours ago | parent | prev | next [-]

I agree with your point, and it is on that point that I disagree with GP. These open weight models, which have ultimately been constructed from so many thousands of years of humanity's work, are also now freely available to all of humanity. To me that is the real marvel and a true gift.

ulfw an hour ago | parent | prev [-]

It's turning out to be a commodity product. Commodity products are a race to the bottom on price. That's how this AI bubble will burst. The investments can't possibly show the ROIs envisioned.

As an LLM user, I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use DeepSeek or Qwen and get very similar results.

Yes, if you're a developer on Claude Code et al., I get the point. But that's few people. The mass market is just using chat LLMs, and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences, but they're too small to be meaningful for the average user.

derektank 2 hours ago | parent | prev | next [-]

Isn’t the reason we have a public domain so that people aren’t in a perpetual debt to their intellectual forebears?

gruez an hour ago | parent [-]

Copyrights last a very long time. Moreover, nothing says it has to be open. The recipe for Coke is still secret.

bandrami 39 minutes ago | parent | next [-]

The recipe to Coca Cola is not copyrighted (recipes in general can't be) but is protected by trade secret laws, which can notionally last forever.

The recipe also isn't that much of a secret, they read it on the air on a This American Life episode and the Coca Cola spokesperson kind of shrugged it off because you'd have to clone an entire industrial process to turn that recipe into a recognizable Coke.

edoceo an hour ago | parent | prev | next [-]

A recent copy-cat

https://www.reddit.com/r/CopyCatRecipes/comments/1qbbo6d/coc...

daveguy an hour ago | parent | prev [-]

The recipe for Coke is not copyrighted, it is a trade secret. Trade secrets can remain protected indefinitely if you can keep them secret. Copyrights are "open" by their nature.

gruez an hour ago | parent [-]

In the context of this discussion, though, what makes you think OpenAI can't keep theirs a trade secret?

daveguy 44 minutes ago | parent [-]

I was agreeing it could last a very long time, even longer than copyright, but specifically because it is not copyright. As an AI model, though, it just won't have value for very long. Models are dated within 6 months and obsolete in 2 years. IP around development may last longer.

justarandomname 3 hours ago | parent | prev | next [-]

yeah, but zero chance of that happening unfortunately.

pear01 2 hours ago | parent [-]

well practiced cynicism is boring.

imo there are actually too few answers for what a better path would even look like.

hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.

smallmancontrov 2 hours ago | parent | next [-]

It's interesting that the prosperity maximum of both the United States and China happened at "market economy kept in line with a firm hand" even though we approached it from different directions (left and right respectively) and in the US case reversed course.

We could use another Roosevelt.

stemlord 2 hours ago | parent | prev | next [-]

people have been pretty clear about a positive path forward

- big tech should pay for the data they extract and sell back to us

- startups should stop forcing ai features that no one wants down our throats

- the vanguard of ai should be open and accessible to all not locked in the cloud behind paywalls

FridayoLeary 2 hours ago | parent | prev | next [-]

But op is frankly absurd. It sounds reasonable for about 1 second before you think about it. What sets tech apart from every other area of human innovation? And why limit it to that? What about mineral exploitation? Oil etc.

It's just not a well thought out comment. If we focus on the "better path forward", the entrance to which is only unlocked by the realisation that big tech's achievements (and thus, profits) belong to humanity collectively... after we reach this enlightened state, what does op believe are the first couple of things a traveller on this path is likely to encounter (beyond Big Tech's money, which incidentally we already take loads of in the form of taxes, just maybe not enough)?

_DeadFred_ an hour ago | parent [-]

Tech is the most set apart area of innovation ever.

First you have tech's ability to scale. The ability to scale also lets it creep new changes/behaviors into every aspect of our lives faster than any previous "engine for change" could.

Tech also inherits, so you can treat it as Lego, building on top of what are, at this point, definitely tens, maybe hundreds of thousands of human-years of work as building blocks. Imagine if you started every house with a hundred thousand human-years of labor already completed instantly. No other domain in human history accumulates tens of millions of skilled human-years annually and allows so much of that work to stack, copy, and propagate at relatively low cost.

And tech's speed of iteration is insane. You can try something, measure it, change it, and redeploy in hours. Unprecedented experimentation on a mass scale leading to quicker evolution.

It's so disingenuous to have tech valuations as high as they are based on these differentiations but at the same time say 'tech is just like everything from the past and must not be treated differently, and it must be assumed outcomes from it are just like historical outcomes'. No it is a completely different beast, and the differences are becoming more pronounced as the above 10Xs over and over.

greesil 2 hours ago | parent | prev [-]

Well practiced criticism of cynicism is boring

relaxing 2 hours ago | parent | prev | next [-]

What should?

FridayoLeary 2 hours ago | parent | prev | next [-]

Sounds like you just want some of their money.

triceratops 2 hours ago | parent | next [-]

Yes, especially since they're talking about wiping out most or all white-collar jobs in our lifetimes. What's wrong with that?

FridayoLeary 2 hours ago | parent [-]

Why drag your dead ancestors into the debate?

On that note they say oil is dead dinosaurs, maybe have a word with Saudi Arabia...

dekhn 2 hours ago | parent | next [-]

Oil comes from algae (and other tiny marine organisms) not dinosaurs.

triceratops 2 hours ago | parent | prev [-]

Was this reply intended for a different comment? Or do I need more sleep?

blactuary 2 hours ago | parent | prev | next [-]

If they want to abandon noblesse oblige we can certainly go back to the old way of evening things out. Their choice

mackeye an hour ago | parent | prev [-]

some would say their money is our money via the ltv :-)

mrwaffle 2 hours ago | parent | prev [-]

Is this technically a form of retroactive mind rape? If so, at least we have the right oligarchic friends experienced in this running the big show. (Apologies if I just broke any rules here.)

mrwaffle an hour ago | parent [-]

This seems to be a touchy subject for YC people with 500+ karma. Not a repudiation but an 'invisible hand' downvote to avoid a response or exposure of an opinion. My ancestors fought in the revolutionary war and like them, I'll die on this very subtle rolling hill of a question. I loved you all as brothers, this may be the end for mrwaffle.

Gene5ive 2 hours ago | parent | prev | next [-]

Up Next: A McSweeney's article where McSweeney's takes the debates about it on Hacker News as seriously as Hacker News takes McSweeney's: way too much

selimthegrim an hour ago | parent [-]

This has the potential to be another /g/ ITT we HN now

jaybyrd 4 hours ago | parent | prev | next [-]

guys we're just trying to take jobs away from you.... please stop being mean to us - richest people on earth 2026

donkey_brains 2 hours ago | parent | next [-]

Today a manager at my work asked all his teams including mine “please write up a report on how many engineers from your teams we could replace with AI”.

Surprisingly, the answer he got was “none, because that’s not how AI works”.

Guess we’ll see if that registers…

MobiusHorizons an hour ago | parent | next [-]

I would love to have responded something like “only one: yours”

But in all seriousness, AI does a pretty good job at impersonating VPs. It's confidently wrong and full of hope for the future.

consumer451 an hour ago | parent | prev | next [-]

I use various agentic dev tools all day long, mostly with Opus. The tools are very capable now, but when planning mid-complexity features, I find the time estimates hilarious.

Phase 1: 4 days

Phase 2: 3 days

Phase 3: 2 days

4 hours later, all the work is done and tested.

The funny part to me was that if I had an AI true believer boss, I would report those time estimates directly, and have a lot of time to do other stuff.

ziml77 18 minutes ago | parent | next [-]

Human time estimates are bad, but the ones that AI gives are just absurd. I've seen them used for everything from small things like planning interviews and short presentations all the way up to large-scale projects. In no case do they make any sense to me. But I think people end up trusting them because they look so confident and well planned, due to how the AIs break things down.

whattheheckheck an hour ago | parent | prev [-]

When you're the boss telling kids how to work, what time estimates will you believe?

'Tis the cycle.

sublinear 2 hours ago | parent | prev [-]

All of them, because cost cutting is a red flag in business regardless of what year it is.

GolfPopper 4 hours ago | parent | prev | next [-]

You forgot... "by stealing from artists and writers at scale".

jacquesm 4 hours ago | parent | next [-]

You forgot about 'open source contributors' and 'musicians'.

dylan604 2 hours ago | parent | next [-]

these two groups are used to having their stuff stolen way more than the groups GP listed, so in a way it's kind of appropriate that they were omitted.

soulofmischief 3 hours ago | parent | prev | next [-]

As an open source contributor and musician who is not rich, I am pretty stoked about the engineering, scientific and mathematical advancements being made in my lifetime.

I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

overgard 2 hours ago | parent | next [-]

Well, if you consider Maslow's hierarchy of needs, "creatively enabled" would be a luxury at the top of the pyramid with "self actualization". Luxuries don't matter if the things at the bottom of the pyramid aren't there -- i.e. you can't eat or put a shelter over your head. I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback. Not to mention it's bad business if nobody can afford to use AI because they're unemployed. (I'm not anti-AI, it's an interesting tool, but I think the way it's being developed is inviting a lot of danger for very marginal returns so far)

jacquesm 2 hours ago | parent [-]

> I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback.

That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.

> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.

In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.

johnnyanmac 3 hours ago | parent | prev | next [-]

> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I'm not really a fan of the "you criticize society yet you participate in it" argument.

>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable amounts of people died just so we could get a 5 day workweek and minimum wage.

We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Wyverald 2 hours ago | parent [-]

>> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

> I'm not really a fan of the "you criticize society yet you participate in it" argument.

It seems to me that GP is merely recognizing the parts of technological advance that they do find enjoyable. That's rather far from the "I am very intelligent" comic you're referencing.

> The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Just noting that GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

johnnyanmac 2 hours ago | parent [-]

>GP is merely recognizing the parts of technological advance that they do find enjoyable.

Machine fabrication is nice. Machine fabrication from sweatshop children in another country is not enjoyable. That's the exact nuance missing from their comment.

>GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

I'd hope we'd understand since 2024 that we're in an attention society, and this is a very common tactic used to disenfranchise people from engaging in action against what they find unfair. Enforcing a feeling of inevitability is but one of many methods.

Intentionally or not, language like this does impede the efforts.

nozzlegear 3 hours ago | parent | prev | next [-]

As an open source maintainer, I'm not stoked and I feel pretty much the opposite way. I've only become more annoyed when trying to adopt these tools, and felt more creative and more enabled by reducing their usage and going back to writing code by hand the old fashioned way. AI's only been useful to me as a commit message writer and a rubber duck.

> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.

blibble 3 hours ago | parent | prev | next [-]

> I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies

I'd rather be dead than a cortex reaver[1]

(and I suspect as I'm not a billionaire, the billionare owned killbots will make sure of that)

[1]: https://www.youtube.com/watch?v=1egtkzqZ_XA

callc 3 hours ago | parent | prev [-]

You could say the same thing about the atomic bomb.

Cool science and engineering, no doubt.

Not paying any attention to societal effects is not cool.

Plus, presenting things as inevitabilities is just confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened". Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd go ahead and bet all their savings on it.

peyton 2 hours ago | parent [-]

I dunno, I take a more McLuhan-esque view. We’re not here to save the world every single time repeatedly.

TheDong 2 hours ago | parent | prev [-]

You're saying "musicians" aren't "artists", and "open source contributors" aren't artists _or_ writers? Artists covers both of the groups you said.

jacquesm 2 hours ago | parent [-]

Yes, we're all artists. Good now?

Mars008 an hour ago | parent | prev | next [-]

The picture will be incomplete if we don't mention that those 'artists and writers' are using the results at scale.

malfist 4 hours ago | parent | prev | next [-]

Techbros trying to replace wage theft as the largest $ crime in the US

tsunamifury 2 hours ago | parent | prev | next [-]

Something something… great artists steal.

jaybyrd 4 hours ago | parent | prev [-]

well if all the talent is stolen and put into our water destruction machine we can make significantly worse and more expensive versions of just giving the job to a wagey

logicprog 2 hours ago | parent [-]

The AI water issue is bunk. https://andymasley.substack.com/p/the-ai-water-issue-is-fake

goalieca 2 hours ago | parent [-]

That article was clearly AI generated. I read pages of it and still didn't see any actual data, just different phrasings of that claim.

logicprog 24 minutes ago | parent [-]

What are you talking about? He goes into plenty of data, domain-relevant definitions, specific cases, etc. He links to reliable sources for every numerical claim, of which there are several per paragraph, shows graphs and pictures, and does a lot of math (all of which I manually checked myself on paper as I went through). Also, the writing style is very much not ChatGPT-like, especially with all of the very honest corrections and edits he's added over time, which an AI slop purveyor wouldn't do.

The deep analysis starts at this section: https://andymasley.substack.com/p/the-ai-water-issue-is-fake...

You can't just dismiss anything you don't like as AI.

pesus 4 hours ago | parent | prev [-]

On one hand, we're actively destroying society, but on the other, billionaires are getting richer! Why are you mad at us!?

Sharlin 2 hours ago | parent [-]

Something something for a brief moment we created a lot of value for the shareholders

Joel_Mckay 18 minutes ago | parent [-]

With -$4.50 revenue per new customer, these gamblers are demonstrably creating an externalized debt for society that will come due when the market inevitably implodes.

Some are projecting a >35% drop in the entire index when reality hits the "magnificent" 7. Look at the price of gold, corporate cash flows, and the laggard performance of US bonds. That isn't normal by any definition. =3

snowwrestler 2 hours ago | parent | prev | next [-]

> As someone who desperately needs this technology to work out, I can honestly say it is the most essential tool ever created in all of human history.

For those having trouble finding the humor, it lies in the vast gulf between grand assertions that LLMs will fundamentally transform every aspect of human life, and plaintive requests to stop saying mean things about it.

As a contrast: truly successful products obviate complaints. Success speaks for itself. In TV, software, e-commerce, statins, ED pills, modern smartphones, social media, etc… winning products went into the black quickly and made their companies shitloads of money (profits). No need to adjust vibes, they could just flip everyone the bird from atop their mountains of cash. (Which can also be pretty funny.)

There are mountains of cash in LLMs today too, but so far they’re mostly on the investment side of the ledger. And industry-wide nervousness about that is pretty easy to discern. Like the loud guy with a nervous smile and a drop of sweat on his brow.

https://youtu.be/wni4_n-Cmj4

So much of the current discourse around AI is the tech-builders begging the rest of the world to find a commercially valuable application. Like the AgentForce commercials that have to stoop to showing Matthew McConaughey suffering the stupidest problems imaginable. Or the OpenAI CFO saying maybe they’ll make money by taking a cut of valuable things their customers come up with. “Maybe someone else will change the world with this, if you’ll all just chill out” is a funny thing to say repeatedly while also asking for $billions and regulatory forbearance.

datsci_est_2015 an hour ago | parent | next [-]

> As a contrast: truly successful products obviate complaints. Success speaks for itself.

Makes me consider: Dotcom domains, Bitcoin, Blockchain, NFTs, the metaverse, generative AI…

Varying degrees of utility. But the common thread is people absolutely begging you to buy in, preying on FOMO.

twoodfin an hour ago | parent | prev [-]

Or maybe McSweeney’s hasn’t been consistently funny for years and years?

snowwrestler an hour ago | parent [-]

McSweeney’s was never consistently funny. This is a good piece though.

i_love_retros an hour ago | parent | prev | next [-]

Today I asked the Copilot agent a question about a selector in a Cypress test and it requested to run a Python command in my terminal.

Brajeshwar an hour ago | parent | prev | next [-]

We humans will read this and laugh and chuckle, but the AI Overlords will not understand that. This will be added to the training data and become a truth. But what if it is?

gradus_ad 2 hours ago | parent | prev | next [-]

Jensen needs to keep escalating the hype to keep the hoarding dynamics in play, because that's what's selling GPUs. You can't look at voracious GPU demand as a real signal of AI app profitability or general demand. It's a function of global tech oligarchs with gargantuan cash hoards not wanting to be left behind. But hoarding dynamics are nonlinear through self-reinforcement, and the moment any hint of the limitations of current-gen AI crops up, spend will collapse.

olivierestsage an hour ago | parent | prev | next [-]

Powerful catharsis in this

stego-tech 2 hours ago | parent | prev | next [-]

Excellent satire, absolutely something I could see in The Onion or Hard Drive as an Op-Ed.

kindawinda an hour ago | parent | prev | next [-]

dumbass article

hedayet 35 minutes ago | parent | prev | next [-]

The same people selling you AI today (AGI tomorrow) were the ones selling remote work yesterday. Then "mandated" everyone back to the office.

Oh, and most of them had a crypto bag too.

<sigh>

Joel_Mckay 13 minutes ago | parent [-]

Most cons can't create actual value, and inevitably must continue to con to survive. It would be called recidivism if they went to prison. =3

lifetimerubyist an hour ago | parent | prev | next [-]

Gotta go back to shoving these nerds into lockers.

vivzkestrel 21 minutes ago | parent | prev | next [-]

Can we please get an article like this dedicated to Windows 11?

Joel_Mckay 16 minutes ago | parent [-]

No one smart still uses Windows 11 RAT-ware edition. =3

twochillin an hour ago | parent | prev | next [-]

fully expected this to be about nadella

willturman 28 minutes ago | parent [-]

It is.

akomtu 2 hours ago | parent | prev | next [-]

AI is alien intelligence, really. If biotech created an unusual mold that responds to electric impulses the way LLMs do, we would rightfully declare that this mold has some sort of intelligence and for this reason it is, technically speaking, an alien lifeform. AI is just that intelligent mold, but based on transistors instead of organic cells. Needless to say, it's a bad idea to create a competing lifeform that's smarter than us, regardless of whatever flimsy benefits it might have.

Joel_Mckay 5 minutes ago | parent | next [-]

LLMs are not real AI, it would take 75% of our galaxy's energy to reach human-level error rates, and they are economically a fiction... but they don't have to be "AGI" to cause real harm.

https://en.wikipedia.org/wiki/Competitive_exclusion_principl...

The damage is already clear =3

https://www.youtube.com/watch?v=TYNHYIX11Pc

https://www.youtube.com/watch?v=yftBiNu0ZNU

https://www.youtube.com/watch?v=t-8TDOFqkQA

20260126032624 an hour ago | parent | prev [-]

Hey, I just wanted to say, big fan of your work on vixra.org

porkloin 4 hours ago | parent | prev | next [-]

I hate LLMs as much as the next guy, but this was honestly just not very funny. Humor can be a great vehicle for criticism when it's done right, but this feels like clickbait-level lazy writing. I wouldn't criticize it anywhere else, but I have enjoyed reading a bunch of actually good writing from McSweeney's over the years, in the actual literary journal and on their website.

Froztnova 3 hours ago | parent | next [-]

It's that brand of humor that isn't really humor anymore because the person writing it is clearly positively seething behind the keyboard and considers the whole affair to be deadly serious.

I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.

porkloin 3 hours ago | parent [-]

For me I guess I don't really see what it's adding. You can watch an actual video clip of Jensen begging people not to "bully" or say "hurtful" things about AI while wearing a stupid leather jacket. It's a million times funnier to watch him squirm in real life.

I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.

madeofpalk 4 hours ago | parent | prev | next [-]

I think you just don’t like McSweeney’s style.

3 hours ago | parent | prev | next [-]
[deleted]
johnnyanmac 2 hours ago | parent [-]

Like it or not, we're in an attention economy. We've seen that if we aren't loud and brash about it, the administration will happily be loud (and sometimes lie) to push their narrative.

Maybe if we ever return to normal times, and also don't let the other 90% of corruption stay where it's been for the past 40 years, we can start to ease off the noise.

jaybyrd 4 hours ago | parent | prev | next [-]

i think it's a little on the nose but overall def worth reading and funny enough for a chuckle in my opinion

heliumtera 3 hours ago | parent | prev [-]

Agreed, it's almost non-satire given how cynical it is. I loved it.

gip 3 hours ago | parent | prev | next [-]

> "immoral technofascist life"

Many people would rather argue about morality and conscience (of our time, of our society) than confront facts and reality. What we see here is a textbook case of that.

socialcommenter 6 minutes ago | parent | next [-]

It's much easier for someone who blurs the facts to keep a clear conscience because they don't have to acknowledge (to themselves) what they've done.

Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.

I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.

tdb7893 3 hours ago | parent | prev | next [-]

Is there a reason you view conscience and confronting facts as opposed things? It also seems to me that morality and conscience are important to argue about, with facts just being part of that argument.

SpicyLemonZest 2 hours ago | parent [-]

I think that someone interested in discussing facts would not write the phrase "immoral technofascist life". If I took the discussion at face value, I might respond asking for examples of how e.g. Dario Amodei is a "technofascist", but I think we can agree that would be really obtuse of me.

technofastest 2 hours ago | parent | next [-]

> but I think we can agree that would be really obtuse of me.

I would disagree.

Dario Amodei? You're making it too easy:

- Dario is CEO and Co-Founder of an LLM company ("techno-")

- Said LLM company is working with both public and private security companies

- The policies and actions these organizations are taking, often utilizing the technology provided by said LLM company, reflect a far-right, authoritarian, and ultranationalist political ideology characterized by a dictatorial leader, centralized autocracy, militarism, and the forcible suppression of opposition. ("-fascist")

Public:

https://techcrunch.com/2025/01/19/the-pentagon-says-ai-is-sp...

https://www.ft.com/content/e75e3388-4700-413d-ab67-778410c2d...

Private:

https://news.ycombinator.com/item?id=46794365

Wouljya look at all those facts!

datsci_est_2015 an hour ago | parent [-]

No, see, "facts" are what I use to support my worldview, and what you've supplied are arguments, and I can discard your arguments through debate, especially because I believe that they're founded on your feelings (like a silly "conscience").

tdb7893 an hour ago | parent | prev [-]

Haha, my experience is that people making those sorts of pronouncements will argue literally anything, so I definitely wouldn't assume they are uninterested in arguing facts. I agree, though, that arguing with some people is obtuse, and arguing with the original post seems like one of those cases.

My confusion is more with the person I was responding to complaining about people arguing morality, which seems incredibly important to discuss. A lack of facts obviously makes discussions bad, but there's definitely not some dichotomy with discussing morality (at least not with the people I know; my issue has not been so much with people arguing morality, which often makes for my more productive arguments, as with people who have a fundamentally incompatible view of what the facts are).

johnnyanmac 2 hours ago | parent | prev [-]

> instead of confronting facts and reality.

okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?

kshri24 3 hours ago | parent | prev | next [-]

> just use my evil technology

Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.

For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.

Just do enough good that it dwarfs the evil uses of this awesome technology.

budududuroiu 2 hours ago | parent | next [-]

Well, at this moment, the evil things done with technology vastly surpass the good things done with technology.

Democratisation of tech has allowed for more good to happen; centralisation, the opposite. AI is probably one of the most centralisation-happy technologies we've had in ages.

pixl97 2 hours ago | parent [-]

Centralization of technology has been happening at a rapid pace, and is only a tiny bit the fault of technology itself.

Capitalism demands profits. Competition is bad for profits. Multiple factories are bad for profits. Multiple standards are bad for profits. Expensive workers are bad for profits.

mrnaught 2 hours ago | parent | prev | next [-]

As for "just do enough good...", it is hard to define what "good" is. This tech has many dimensions and second-order effects, yet all the tech giants claim it's a "net positive" without fully understanding what is unfolding.

robinhoode 2 hours ago | parent | prev | next [-]

If we lived in a sane society, AI would actually be used for good.

AI is literally trained on human output and used by humans. If humans are doing awful things with it, then it's because humans are awful right now.

I strongly feel this is related to the rise of fascism and wealth inequality.

We need a great conflict like WW2 to release this tension.

wk_end 2 hours ago | parent | prev | next [-]

> It is just math at the end of the day.

Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...

Anyway, it's just kind of a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.

johnnyanmac 2 hours ago | parent | prev [-]

>Nothing either good or bad but thinking makes it so - Shakespeare

That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright system they set up. So I'd call that "evil" in every conventional sense.

>Just do enough good that it dwarfs the evil uses of this awesome technology.

The evil is in the root of the training, though. And sadly, the money is not coming from "good". I don't see any models focusing on ensuring they train only on CC0/FOSS works, so it's hard to argue for any good uses with evil roots.

If they could do that at the bare minimum, maybe they could make the "horses vs. cars" argument. As it is now, this is a car powered by stolen horses. (Also, I work in games, and generative AI is simply trash in quality right now.)

pixl97 2 hours ago | parent [-]

Even this has little to do with AI and points right at the capitalist society that already exists. HN really doesn't like to talk about their golden child that lets money flow, but the concentration of wealth and IP by the super wealthy occurred before GenAI was a thing.

This also ignores the broken fucking copyright system that ensures once you create something you get many lifetimes of fucking off without having to work, so if genAI kills that I won't shed a tear.

irishcoffee 3 hours ago | parent | prev | next [-]

It is highly amusing to me that the same ~2,000 people who have the most to gain from LLM success also largely control the media narratives and the vast majority of the global economy.

Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.

“Dumb fucks”

random_duck 4 hours ago | parent | prev | next [-]

Is this a sign that we plebs are starting to grow discontent?

blibble 3 hours ago | parent | next [-]

it's certainly a change from the "inevitability" vomit the boosters were emitting this time last year

techblueberry an hour ago | parent [-]

Oh, I mean they’re still doing that too:

https://www.darioamodei.com/essay/the-adolescence-of-technol...

blibble 12 minutes ago | parent [-]

oh dear

the whole thing reads as "it's going to be so powerful! give money now!"

heliumtera 4 hours ago | parent | prev [-]

Starting? Society, minus those who struggled with CSS, is fully fatigued by AI.

theLegionWithin 4 hours ago | parent | prev | next [-]

nice satire

lovich 2 hours ago | parent | prev | next [-]

The Luddites weren't anti-technological-progress; they were anti losing their jobs and entire way of life, with an impolite "get fucked, you fucking peasant" message to boot.

I wonder what name the tech bros will come up with to call us for the same feeling nowadays.

yoyohello13 an hour ago | parent [-]

They don’t need a new name. They just keep using Luddite.

trhway 3 hours ago | parent | prev | next [-]

Was the article itself written by AI?

zahlman 3 hours ago | parent [-]

McSweeney's is a well-known Internet satire site that has been in operation for decades. While there are multiple contributors, the style here seems fairly standard for the site, the author has a submission history going back to at least 2020, and I see no LLM clichés. Suspecting AI here makes about as much sense to me as suspecting it on an arbitrarily selected LWN article.

rednafi 3 hours ago | parent | prev | next [-]

"Oh, it's another tool in your repertoire like Bash" doesn't garner billions of dollars in investment. So they have to address it as the next electricity or the internet, when in its current form, it's much closer to a crypto grift than it is to electricity.

Lerc 3 hours ago | parent | prev | next [-]

Perhaps things would work out better if people didn't say mean things regardless of who it's about.

You can still criticise without being mean.

donkey_brains 2 hours ago | parent | next [-]

Woosh

thinkingtoilet 3 hours ago | parent | prev [-]

Explain how to nicely criticize computer software that allows for the generation of sexually explicit images of children.

Lerc 2 hours ago | parent | next [-]

I'm not sure what you are wanting here; are you actually requiring me to be a bully to effect change?

I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority I could loudly make all sorts of disingenuous claims that won't make the world a better place.

I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.

I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.

Countless people have been harmed by the influence of religious texts, I do not advocate for those to be banned, and I do not demand the vilification of people who follow those texts.

Even though I think some books can be harmful, I do not propose attacking people who make printing presses.

What exactly are you requiring here? Pitchforks and torches? Why AI, and not the other software that can be used for the same purposes?

If you want robust regulation that can provide a means to protect people from how models are used then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no-one. I want the world to be better, I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.

chasd00 2 hours ago | parent | prev [-]

Photoshop has been around for a long time.

paodealho 43 minutes ago | parent [-]

And canvases and paint have existed for even longer, but they need someone skilled to make use of them.

Stable Diffusion enabled the average lazy, depraved person to create these images with zero effort, and there are apparently a lot of these people in the world.

bigstrat2003 12 minutes ago | parent [-]

So? At the end of the day, regardless of how skilled one has to be to use it, a tool is not considered morally responsible for how it is used. Nor is the maker of that tool considered morally responsible for how it is used, except in the rare case where the tool only has immoral uses. And that isn't the case here.

mattgreenrocks 2 hours ago | parent | prev | next [-]

It's wild to me that we both see people like Jensen as great and tolerate public whining of the sort in the linked article. Don't get me wrong, there are people who are far worse! But why do we put up with a billionaire whining that people are critical of what they make? At that scale it is guaranteed to have haters. It's just statistics, man.

daft_pink 3 hours ago | parent | prev [-]

Maybe he shouldn't have claimed we could get into a moving vehicle with his AI driving, no problem.