nikisil80 3 days ago

Yeah, sure, but have you considered that the actual cost of running these models is far greater than whatever you might be shelling out for the ad-free apps? You're talking to someone who hates the slopification and enshittification of everything, so you don't need to convince me about that. However, everything I've seen described in the replies to my initial comment - while cute, and potentially helpful on a case-by-case basis - does NOT warrant the amount of resources we are pouring into AI right now. Not even fucking close. It'll all come crashing down, taxpayers the world over will be left holding the bag, and for what? So that we can all have a less robust version of an app that already exists, but with the colours we want and the button where we want it?

If AI cost nothing and wasn't absolutely decimating our economy, I'd find what you've shared cute. However, we are putting literally all of our eggs, and the next generation's eggs, and the one after that, AND the one after that, into this one thing, which, I'm sorry, is so far away from everything that keeps on being promised to us that I can't help but feel extremely depressed.

falloutx 3 days ago | parent | next [-]

At this point it doesn't matter much whether we use AI or not: the apps aren't selling, yet they're being produced at an alarming rate.

The number of projects being submitted to Product Hunt is 4x what it was the year before.

The market is shrinking rapidly because more people now make their own apps.

Even when you make a typo and land on the wrong website, there's a good chance it's selling more AI snake oil. Yet none of these apps are feature-complete, and they're easily beaten by apps guys made back in the 2010s (tldr & sketchbook for the drawing space).

The only way to excite investors is to fake the ARR by giving out free trials and selling before the first recurring charge lands.

minimaxir 3 days ago | parent | prev [-]

You are attempting to move the goalposts. There are two different points in this debate:

1) Modern LLMs are an inflection point for coding.

2) The current LLM ecosystem is unsustainable.

This submission's discussion is only about #1, and #2 does not invalidate it. Even if the ecosystem crashes, open-source LLMs that leverage the same tricks Opus 4.5 does will just be used instead.

strange_quark 3 days ago | parent [-]

But it's only an inflection point if it's sustainable. When this comes crashing down, how many people are going to be buying $70k GPUs to run an open source model?

minimaxir 3 days ago | parent | next [-]

I said open-source models, not locally-hosted models. Essentially, more power to inference-only providers such as Groq and Together AI, which host the large-scale OSS LLMs and will be less affected by a crash as long as the demand for coding agents is there.

simonw 3 days ago | parent | prev | next [-]

> When this comes crashing down, how many people are going to be buying $70k GPUs to run an open source model?

If the AI thing does indeed come crashing down I expect there will be a whole lot of second-hand GPUs going for pennies on the dollar.

strange_quark 3 days ago | parent [-]

Ok, and then? Taking a one time discount on a rapidly depreciating asset doesn’t magically make this whole industry profitable, and it’s not like you’re going to start running a GB200 in your basement.

simonw 3 days ago | parent [-]

Then I'll wait for a bunch of companies to spring up running those cheap GPUs in their data centers and selling me access to GLM-4.7 and friends.

Or I'll start one myself, if the market fails to provide!

nikisil80 3 days ago | parent | prev [-]

Checked your history. From a fellow skeptic: I know how hard it is to reason with people around here. You and I need to learn to let it go. In the end, the people at the top have set this up so that either way, they win. And we're down here telling the people at our level to stop feeding the monster, only to be told to fuck off anyway.

So cool bro, you managed to ship a useless (except for your specific use-case) app to your iphone in an hour :O

What I think this is really doing is confronting people with the fact that most jobs in the modern economy (mine included, btw) are devoid of purpose. This is something that, as a person on the far left, I've understood for a long time. However, a lot (and I mean a loooooot) of people have never even considered this. So when they find that an AI agent is able to do THEIR job for them in a fraction of the time, they MUST interpret it as AI being some finality to human ingenuity and progress, given the self-importance they've attributed to themselves and their occupation - all this instead of realizing that, you know, all of our jobs are useless, we all do the exact same useless shit which is extremely easy to replicate quickly (except for a select few occupations), and that's it.

I'm sorry to tell anyone who's reading this with a differing opinion, but if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't. I say this, again, as someone who, beyond their PhD thesis (and even then), does not produce anything of value to the world, while being paid handsomely for it.

christophilus 3 days ago | parent | next [-]

> if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't.

This doesn’t logically follow. AI agents produce loads of value. Cotton picking was and still is useful. The cotton gin didn’t replace useless work. It replaced useful work. Same with agents.

strange_quark 3 days ago | parent | prev [-]

> You and I need to learn to let it go.

Definitely, it’s an unhealthy fixation.

> I'm sorry to tell anyone who's reading this with a differing opinion, but if AI agents have proven revolutionary to your job, you produced nothing of actual value for the world before their advent, and still don't.

I agree with this, but I think my take on it is a lot less nihilistic than yours. I think people vastly undersell how much effort they put into doing something, even if that something is vibecoding a slop app that probably already exists. But if people are literally prompting Claude with a few sentences and getting revolutionary results, then yes, their job was meaningless and they should find something to do that they're better at.

But what frustrates me the most about this whole hype wave isn't just that the powers that be have bet the entire economy on a fake technology; it's that it's sucking all of the air out of the room. I think most people's jobs can actually provide value, and there's so much work to be done to make _real_ progress. But instead of actually improving the world, all the time, money, and energy is being thrown into a wasteful technology that is actively making the world a worse place. I'm sure it's always been like this and I was just too naive to see it, but I much preferred it when the tech companies at least pretended they cared about the impact their products had on society rather than simply trying to extract the most value out of the same 5 ideas.

nikisil80 3 days ago | parent [-]

Yeah, I do tend to have a rather nihilistic view on things, so apologies.

I really think we're just cooked at this point. The number of people (some great friends whom I respect) who have told me in casual conversation that if their LLM were taken from them tomorrow, they wouldn't know how to do their work (or some flavour of that statement) has made me realize how deep the problem runs.

We could go on and on about this, but let's both agree to try to look inward more and keep our own things in order, while most other people get hooked on the absolute slop machine that is AI. Eventually, the LLM providers will need to start ramping up the costs of their subscriptions, and maybe then people will start clicking that the shitty code generated for their pointless/useless app is not worth the actual cost of inference (which some conservative estimates put at thousands of dollars per month on a subscription basis). For now, people are just putting their heads in the sand and assuming that physicists will somehow find a way to use quantum computers to speed up inference by a factor of 10^20 in the next years, while simultaneously slashing its costs (lol).

But hey, Opus 4.5 can cook up a functional app that goes into your emails and retrieves all outstanding orders - revolutionary. Definitely worth the many kWh and thousands of liters of water required, eh?

Cheers.

keeda 3 days ago | parent | next [-]

A couple of important points you should consider:

1. The AI water issue is fake: https://andymasley.substack.com/p/the-ai-water-issue-is-fake (This one goes into OCD-levels of detail with receipts to debunk that entire issue in all aspects.)

2. LLMs are far, far more efficient than humans in terms of resource consumption for a given task: https://www.nature.com/articles/s41598-024-76682-6 and https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans...

The studies focus on a single representative task, but in a thread about coding entire apps in hours as opposed to weeks, you can imagine the multiples involved in terms of resource conservation.

The upshot is, generating and deploying a working app that automates a bespoke, boring email workflow will be way, way, wayyyyy more efficient than a human manually doing that workflow every time.

Hope this makes you feel better!

D-Machine 3 days ago | parent [-]

> 2. LLMs are far, far more efficient than humans in terms of resource consumption for a given task: https://www.nature.com/articles/s41598-024-76682-6 and https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans...

I want to push back on this argument, as it seems suspect given that none of these tools are creating profit, and so require funds / resources that are essentially coming from the combined efforts of much of the economy. I.e. the energy externalities here are monstrous and never factored into these things, even though these models could never have gotten off the ground if not for the massive energy expenditures that were (and continue to be) needed to sustain the funding for these things.

To simplify, LLMs haven't clearly created the value they have promised, but have eaten up massive amounts of capital / value produced by everyone else. But producing that capital had energy costs too. Whether or not all this AI stuff ends up being more energy efficient than people needs to be measured on whether AI actually delivers on its promises and recoups the investments.

EDIT: I.e. it is wildly unclear at this point that if we all pivot to AI that, economy-wide, we will produce value at a lower energy cost, and, even if we grant that this will eventually happen, it is not clear how long that will take. And sure, humans have these costs too, but humans have a sort of guaranteed potential future value, whereas the value of AI is speculative. So comparing energy costs of the two at this frozen moment in time just doesn't quite feel right to me.

keeda 3 days ago | parent | next [-]

These tools may not be turning a profit yet, but as many point out, this is simply due to deeply subsidized free usage to capture market share and discover new use cases.

However, their economic potential is undeniable. Just taking the examples in TFA and this sub-thread, the author was able to create economic value by automating rote aspects of his wife's business and by cancelling existing subscriptions to other apps. TFA doesn't mention what he paid for these tokens, but over the lifetime of his apps I'd bet he captures way more value than the tokens would have cost him.

As for the energy externalities, the ACM article puts some numbers on them. While acknowledging that this is an apples/oranges comparison, it points out that the training cost for GPT-3 (article is from mid-2024) is about 5x the cost of raising a human to adulthood.

Even if you 10x that for GPT-5, that is still only the cost of raising 50 humans to adulthood in exchange for a model that encapsulates a huge chunk of the world's knowledge, which can then be scaled out to an infinite number of tasks, each consuming a tiny fraction of the resources of a human equivalent.

As such, even accounting for training costs, these models are far more efficient than humans for the tasks they do.
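To see the shape of that amortization argument, here is a toy sketch; every number below is an illustrative assumption in arbitrary energy units, not a measured figure from either article:

```python
# Toy sketch of the amortization argument above. All numbers are
# illustrative assumptions in one arbitrary energy unit, NOT measurements.
training_cost = 5_000.0   # assumption: one-off cost of training the model
human_per_task = 1.0      # assumption: energy a human spends on one task
llm_per_task = 0.01       # assumption: inference is a tiny fraction of that

def llm_total(tasks: int) -> float:
    """Training cost amortized across all tasks, plus per-task inference."""
    return training_cost + llm_per_task * tasks

def human_total(tasks: int) -> float:
    return human_per_task * tasks

# Break-even task count, after which the model side is cheaper overall:
breakeven = training_cost / (human_per_task - llm_per_task)
print(round(breakeven, 1))     # 5050.5
print(llm_total(1_000_000))    # 15000.0, vs...
print(human_total(1_000_000))  # 1000000.0
```

The point is purely structural: a large fixed training cost divided over enough tasks approaches the marginal inference cost, which is the crux of the efficiency claim either way one sets the actual numbers.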

nikisil80 2 days ago | parent [-]

I appreciate your responses to my comments, including the addition of reading material. However, I'm going to have to push back on both points.

Firstly, saying that because AI water use is on par with other industries, we shouldn't scrutinize it is a bit short-sighted. If the future Altman et al want comes to be, the sheer scale of deployment of AI-focused data centers will lead to nominal water use orders of magnitude larger than other industries'. Of course, on a relative scale, they can be seen as 'efficient', but even something efficient, when built out to massive scale, can suck up all of our resources. It's not AI's fault that water is a limited resource on Earth, and AI is not the first industry to use a ton of water; however, eventually, with all other industries + AI combined (again, imagining the future the AI Kings want), we are going 300 km/h down the road to worldwide water scarcity. We are at a point where we need to seriously rethink our relationship with water as a society - not one where we can spawn whole new, extremely consumptive industries (even if, in relative terms, they're on par with what we've been doing, which isn't saying much given the state of the climate) whose upsides are still debatable and not at all proven beyond a doubt.

As for the second link, there's a pretty easy rebuttal, which aligns with the other reply to your link. Sure, LLMs are more energy-efficient at generating text than human beings, but do LLMs actually create new ideas? Write new things? Any text written by an LLM is based on someone else's work. There is a cost to creativity - to giving birth to actual ideas - that LLMs will never incur, which makes them seem more efficient. But in the end they're more efficient at (once again) tasks for which we humans have provided them with plenty of examples (like writing corporate emails! Or fairly cookie-cutter code!), and at some point the value creation is limited.

I know you disagree with me, it's ok - you are in the majority and you can feel good about that.

I honestly hope the future you foresee where LLMs solve our problems and become important building blocks to our society comes to fruition (rather than the financialized speculation tools they currently are, let's be real). If that happens, I'll be glad I was wrong.

I just don't see it happening.

keeda 2 days ago | parent [-]

These are important conversations to have because there is so much hyperbole in both directions that a lot of people end up having strong but misguided opinions. I think it's very helpful to consider the impact of LLMs in context (heheh) of the bigger picture rather than in isolation, because suddenly a lot of things fall into perspective.

For instance, all water use by data centers is a fraction of the water used by golf courses! If it really does come down to the wire for conserving water, I think humanity has the option of foregoing a leisure activity for the relatively wealthy in exchange for accelerated productivity for the rest of the world.

And totally, LLMs might not be able to come up with new ideas, but they can super-charge the humans who do have ideas and want to develop them! An idea that would have taken months to explore and develop can now be done in days. And given that the majority of ideas fail, we'd be failing that much faster too!

In either case, just eyeballing the numbers we currently have, on average the resources a human without AI assistance would consume to conclude an endeavor far outweigh the resources consumed by both that human and an assisting LLM.

I would agree that there will likely be significant problems caused by widespread adoption of AI, but at this point I think they will be social (e.g. significant job displacement, even more wealth inequality) rather than environmental.

ben_w 9 hours ago | parent | prev [-]

> I want to push back on this argument, as it seems suspect given that none of these tools are creating profit, and so require funds / resources that are essentially coming from the combined efforts of much of the economy. I.e. the energy externalities here are monstrous and never factored into these things, even though these models could never have gotten off the ground if not for the massive energy expenditures that were (and continue to be) needed to sustain the funding for these things.

While it is absolutely possible, even plausible, that the economics of these models and providers is the next economic crash in waiting, somewhere between Enron (at worst, if they're knowingly cooking books) or Global Financial Crisis (if they're self-delusional rather than actively dishonest), we do have open-weights models that get hosted for money, that people play with locally if they're rich enough for the beefy machines, and that are not too far behind the SOTA as to suggest a difference in kind.

This all strongly suggests that the resource consumption per token by e.g. Claude Code would be reasonably close to the list price if they weren't all doing a Red Queen's race[0], running as hard as they can just to remain relevant against each other's progress, in an all-pay auction[1] where only the best can ever hope to cash anything out, and even that may never be enough to cover the spend.

Thing is, automation has basically always done this. It's more of a question of "what tasks can automation actually do well enough to bother with?" rather than "when it can, is it more energy efficient than a human?"

A Raspberry Pi Zero can do basic arithmetic faster than the sum total performance of all 8 billion living humans, even if every human had trained hard and reached the level of the current world record holder, for a tenth of the power consumption of just one of those humans' brains, or 2% of their whole body. But that's just arithmetic. Stable Diffusion 1.5 had a similar thing: when it came out, the energy cost to make a picture on my laptop was comparable with the calories consumed while typing in a prompt for it… but who cares, SD 1.5 had all that Cronenberg anatomy. What matters is when the AI is "good enough" for the tasks against which it is set.

To the extent that Claude Code can replace a human, and the speed at which it operates…

Well, my experiments just before Christmas (which are limited, and IMO flawed in a way likely to overstate the current quality of the AI) say the speed of the $20 plan is about 10 sprints per calendar month, while the quality is now at the level of a junior with 1-3 years experience who is just about to stop being a junior. This means the energy cost per unit of work done is comparable with the energy cost needed to have that developer keep a computer and monitor switched on long enough to do the same unit of work. The developer's own body adds another 100-120 watts to that from biology, even if they're a free-range hippie communist who doesn't believe in money, cooked food, lightbulbs, nor having a computer or refrigerator at home, and who commutes by foot from a yurt with neither AC nor heating, ditto the office.

Where the AI isn't good enough to replace a human, (playing Pokemon and managing businesses?) it's essentially infinitely more expensive (kWh or $) to use the AI.

Still, this does leave a similar argument as with aircraft: really efficient per passenger-kilometre, but they enable so many more passenger-kilometres than before as to still sum to a relevant problem.

[0] https://en.wikipedia.org/wiki/Red_Queen%27s_race

[1] https://en.wikipedia.org/wiki/All-pay_auction

simonw 3 days ago | parent | prev | next [-]

> For now, people are just putting their heads in the sand and assuming that physicists will somehow find a way to use quantum computers to speed up inference by a factor of 10^20 in the next years, while simultaneously slashing its costs (lol).

GPT-3 Da Vinci cost $20/million tokens for both input and output.

GPT-5.2 is $1.75/million for input and $14/million for output.

I'd call that pretty strong evidence that they've been able to dramatically increase quality while slashing costs, over just the past ~4 years.
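As a back-of-envelope check of those list prices (the 1M-input + 1M-output workload is an arbitrary assumption, chosen only to make the two pricing schemes comparable):

```python
# List prices in USD per million tokens, from the comment above.
# GPT-3 Da Vinci charged one rate for both input and output.
gpt3_in, gpt3_out = 20.00, 20.00
gpt5_in, gpt5_out = 1.75, 14.00

# Cost of a hypothetical workload: 1M input + 1M output tokens.
cost_gpt3 = gpt3_in + gpt3_out   # $40.00
cost_gpt5 = gpt5_in + gpt5_out   # $15.75

drop = 1 - cost_gpt5 / cost_gpt3
print(f"~{drop:.0%} cheaper")    # ~61% cheaper
```

And that is before accounting for the quality gap between the two models, which makes the effective cost per unit of useful output fall even faster than the raw token price.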

tuesdaynight 3 days ago | parent [-]

Isn't that kind of related to the amount of money thrown at the field? If the economy gets worse for any reason, do you think we can still expect this level of cost-cutting in the future?

strange_quark 3 days ago | parent | prev [-]

> But hey, Opus 4.5 can cook up a functional app that goes into your emails and retrieves all outstanding orders - revolutionary. Definitely worth the many kWh and thousands of liters of water required, eh?

The thing is, in a vacuum this stuff is actually kinda cool. But hundreds of billions in debt-financed capex that will never see a return, and this is the best we've got? Absolutely cooked indeed.