| ▲ | What AI Is Really For (chrbutler.com) |
| 85 points by delaugust 3 hours ago | 78 comments |
| |
|
| ▲ | Dilettante_ 2 hours ago | parent | next [-] |
| My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who say it doesn't. Expecting it to one-shot your entire module will leave you disappointed; using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions. |
| |
| ▲ | kulahan 43 minutes ago | parent | next [-] | | An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there. | |
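A minimal sketch of the rollback safety net such an interface might lean on, in Python; propose_git_commands is a hypothetical stand-in for whatever LLM call turns a plain-language goal into git commands, and nothing here is a real tool:

    import subprocess

    def run_git(args):
        """Run a git command, raising if it fails."""
        return subprocess.run(["git"] + args, check=True,
                              capture_output=True, text=True).stdout

    def agentic_git(goal, propose_git_commands):
        """Let an LLM drive git toward `goal`, but checkpoint first.

        propose_git_commands is a hypothetical LLM call returning a list
        of git argument lists, e.g. [["checkout", "-b", "feature"], ...];
        it may hallucinate, hence the checkpoint.
        """
        checkpoint = run_git(["rev-parse", "HEAD"]).strip()
        try:
            for args in propose_git_commands(goal):
                run_git(args)
        except subprocess.CalledProcessError:
            # A hallucinated or broken command: roll back to the checkpoint.
            run_git(["reset", "--hard", checkpoint])
            raise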
| ▲ | m463 an hour ago | parent | prev | next [-] | | > frees you up from a lot of little trivial distractions. I think one huge issue in my life has been: getting started If AI helps with this, I think it is worth it. Even if getting started is incorrect, it sparks outrage and an "I'll fix this" momentum. | |
| ▲ | helterskelter an hour ago | parent | prev | next [-] | | Honestly one of the best use cases I've found for it is creating configs. It used to be that I could spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me, without having to do trial and error or locate some obscure comment from 2005 that tells me what I need to know. | | |
| ▲ | mattmanser 26 minutes ago | parent [-] | | Depends what you're doing. If it's a less-trodden path, expect it to hallucinate some settings. Another regular thing I see: it adds some random other settings without comment, and then when you ask it about them it goes, whoops, yeah, those aren't necessary. |
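One cheap guard against that failure mode is to diff the model's output against the settings you actually asked about, so the unexplained extras stand out. A hedged Python sketch; the setting names are just examples:

    def unexpected_settings(llm_config: dict, requested: set) -> set:
        """Return settings the LLM added that were never asked about."""
        return set(llm_config) - requested

    # We asked about two nvim settings; the model returned four.
    suggested = {"relativenumber": True, "scrolloff": 8,
                 "lazyredraw": True, "ttyfast": True}
    extras = unexpected_settings(suggested, {"relativenumber", "scrolloff"})
    print(extras)  # {'lazyredraw', 'ttyfast'} -- make the model justify these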
| |
| ▲ | awesome_dude 2 hours ago | parent | prev [-] | | And bug fixes. "This lump of code is producing this behaviour when I don't want it to" is a quick way to find/fix bugs (IME). BUT it requires me to understand the response (sometimes the AI hits the nail on the head; sometimes it says something that makes my brain go "that's not it, but now I know exactly what it is"). |
|
|
| ▲ | sockgrant 2 hours ago | parent | prev | next [-] |
“As a designer…” IMHO the bleeding edge of what’s working well with LLMs is within software engineering, because we’re building for ourselves first. Claude Code is incredible. Where I work, there are a huge number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile. I find it hard to buy into opinions of non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don’t doubt they don’t yet have compelling AI tooling. |
| |
| ▲ | ihaveajob 2 hours ago | parent | next [-] | | I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful. | | |
| ▲ | verdverm 2 hours ago | parent [-] | | I've been using Google ADK to create custom agents (fantastic SDK). With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface |
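For the curious, a minimal sketch of a root agent delegating to a subagent in Google ADK, adapted from its public quickstart; the model name, tool, and agent names are placeholder assumptions:

    from google.adk.agents import Agent

    def search_tickets(query: str) -> dict:
        """Placeholder tool: look up tickets in an internal tracker."""
        return {"status": "success", "results": [f"TICKET-123 matches '{query}'"]}

    # A focused subagent that only handles ticket lookups.
    ticket_agent = Agent(
        name="ticket_agent",
        model="gemini-2.0-flash",
        description="Finds and summarizes internal tickets.",
        tools=[search_tickets],
    )

    # The root agent delegates ticket questions to the subagent.
    root_agent = Agent(
        name="helpdesk",
        model="gemini-2.0-flash",
        instruction="Route ticket questions to ticket_agent; answer the rest yourself.",
        sub_agents=[ticket_agent],
    )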
| |
| ▲ | hagbarth 2 hours ago | parent | prev | next [-] | | If you read a little further in the article, the main point is _not_ that AI is useless, but rather that, instead of AGI god-building, it's a regular technology. A valuable one, but not infinite growth. | | |
| ▲ | NitpickLawyer 2 hours ago | parent [-] | | > but rather that, instead of AGI god-building, it's a regular technology. A valuable one, but not infinite growth. AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory, the singularity and all that stuff. I see more and more people mixing the two, arguing against ASI being a thing when they're actually talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth: that's ASI. If and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would call what we have today "AGI"? | | |
| |
| ▲ | hollowturtle 2 hours ago | parent | prev | next [-] | | Where are the products? This site and everywhere else on the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe rust", burning millions and millions of compute on scaffolding stupid stuff. Such a disappointment | | |
| ▲ | redorb an hour ago | parent [-] | | I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation. | | |
| ▲ | layer8 27 minutes ago | parent | next [-] | | Citing AI software as the only example of how AI benefits software development has a bit of the flavor of self-help books describing how to attain success and fulfillment by writing self-help books. I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently. | |
| ▲ | emp17344 23 minutes ago | parent | prev | next [-] | | It doesn’t matter what you think. Where’s all the data proving that AI is actually valuable? All we have are anecdotes and promises. | |
| ▲ | hollowturtle an hour ago | parent | prev | next [-] | | ChatGPT is... a chat with some "augmentation" features, aka outputting rich HTML responses; nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again, where are the products? Where the heck is a Windows without the bloat, one that works reliably before going totally agentic? And therefore idiotic, since it doesn't work reliably | |
| ▲ | oblio 12 minutes ago | parent | prev [-] | | I agree with everyone else: where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people? There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts. Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity gains; they could be throwing more bodies at the problem and we'd never know. |
|
| |
| ▲ | muldvarp an hour ago | parent | prev [-] | | > IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first. How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me. | | |
| ▲ | DennisP an hour ago | parent [-] | | Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited. Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us. | |
| ▲ | muldvarp 22 minutes ago | parent [-] | | > Software engineers have been automating our own work since we built the first assembler. The declared goal of AI is to automate software engineering entirely. That is in no way comparable to building an assembler. So the question is mostly whether or not this goal will be achieved. Still, nobody is building these systems _for_ me. They're building them to replace me, because my salary is too much for them to pay. |
|
|
|
|
| ▲ | corry 2 hours ago | parent | prev | next [-] |
| "The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe." This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power. But to me the article fails to: (1) actually make the case that AI's not going to be 'valuable enough' which is a sweeping and bold claim (especially in light of its speed), and; (2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but matters if we're talking 10% of 100x overvalued. If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that). Higher Ed is in crisis; VC has bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech co's are paying crazy amounts for top AI talents... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change. As for #2 - well, that's the whole rub isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stop, then go make your fortune! "The Big Short 2: The AI Boogaloo". |
|
| ▲ | aynyc 2 hours ago | parent | prev | next [-] |
| A bit of sarcasm, but I think it's porn. |
| |
| ▲ | righthand 2 hours ago | parent [-] | | It’s at least about stimulating you to give richer data. Which isn’t quite porn. |
|
|
| ▲ | xeckr 2 hours ago | parent | prev | next [-] |
The AI race is presumably won by whoever can automate AI R&D first, so everyone in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens. |
| |
| ▲ | HarHarVeryFunny an hour ago | parent [-] | | This notion of a hard takeoff, or singularity, based on self-improving AI rests on the implicit assumption that what's holding AI progress back is a lack of AI researchers/developers, which is false. Ideas are a dime a dozen; the bottleneck is the money/compute to test them at scale. What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress? | | |
| ▲ | xeckr 40 minutes ago | parent [-] | | It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress; otherwise, why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier. |
|
|
|
| ▲ | faceball2000 2 hours ago | parent | prev | next [-] |
| What about surveillance? Lately I've been feeling that is what it's really for. Because our data can be queried in a much more powerful way when it has all been used to train LLMs. |
|
| ▲ | njarboe 2 hours ago | parent | prev | next [-] |
Many people use AI as their source of knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. An AI that is "smarter" than (maybe) 95% of the population will be a very big deal, even if it never reaches superintelligence. |
| |
| ▲ | apsurd 2 hours ago | parent | next [-] | | This means to me that AI is rocket fuel for our post-truth reality. Post-truth is a big deal, and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes. Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into, and we don't know how to deal with it. I don't have any answers. Seems like it'll get worse before it gets better. | |
| ▲ | emp17344 2 hours ago | parent | prev | next [-] | | How is this different from a less reliable search engine? | |
| ▲ | 2 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | block_dagger 2 hours ago | parent | prev | next [-] |
| > To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end. What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo. |
| |
| ▲ | kmnc 2 hours ago | parent [-] | | It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales… so what’s missing for consciousness is also magic and fairytales? I’ve yet to see any compelling argument for believing enough compute wouldn’t allow us to code consciousness. | | |
| ▲ | apsurd 2 hours ago | parent [-] | | Yes, that's just it though: it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality". I'm suspicious of the idea that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why assume such a clear straight line from compute to consciousness? |
|
|
|
| ▲ | exceptione 2 hours ago | parent | prev | next [-] |
| I think this is the best part of the essay: > But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
|
|
| ▲ | Kiro an hour ago | parent | prev | next [-] |
| > it’s a useful technology that is very likely overhyped to the point of catastrophe I wish more AI skeptics would take this position but no, it's imperative to claim that it's completely useless. |
| |
| ▲ | mwhitfield an hour ago | parent [-] | | I've had *very* much the opposite experience. Very nearly every AI skeptic take I read has exactly this opinion, if not always so well-articulated (until the last section, which lost me). But counterarguments always attack the complete strawman of "AI is utterly useless," which very few people, at least within the confines of the tech and business commentariat, are making. | | |
| ▲ | Kiro an hour ago | parent [-] | | Maybe I'm focusing too much on the hardliners, but I see it everywhere, especially in tech. | | |
| ▲ | layer8 13 minutes ago | parent | next [-] | | If you’re talking about forums and social media, or anything attention-driven, then the prevalence of hyperbole is normal. | |
| ▲ | emp17344 27 minutes ago | parent | prev [-] | | Where’s all the data showing productivity increases from AI adoption? If AI is so useful, it shouldn’t be hard to prove it. |
|
|
|
|
| ▲ | w_for_wumbo an hour ago | parent | prev | next [-] |
This is what I wonder too: what is the end game?
Advance technology so that we can have anything that we want, whenever we want it.
Fly to distant galaxies.
Increase the options available to us and our offspring.
But ultimately, what will we gain from that?
Is it to say that we did it or is it for the pleasure of the process?
If it's for pleasure, then why have we made our processes so miserable for everyone involved? If it's to say that we did it, couldn't we just not do it and say that we did? That's the whole point of fantasy.
Is Elon using AI to supplement his own lack of imagination? I could be wrong, this could be nonsense. I just can't make sense of it. |
| |
| ▲ | JohnMakin 12 minutes ago | parent [-] | | > Fly to distant galaxies Unless AI can change the laws of physics, extremely unlikely. |
|
|
| ▲ | eightman an hour ago | parent | prev | next [-] |
| The use case for AI is spam. |
|
| ▲ | crazygringo 2 hours ago | parent | prev | next [-] |
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. > Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that. I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPUs is for regular AI. It doesn't need AGI to justify it. I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple of decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource-usage modeling. It's not dependent on "coding consciousness" as the author describes. |
|
| ▲ | Animats 2 hours ago | parent | prev | next [-] |
| It's pretty clear that the financialization aspect of AI is a bubble. There's way too much market cap created by trading debt back and forth. How well AI will work remains an open question at this point. |
| |
| ▲ | milesskorpen 2 hours ago | parent [-] | | It's a big number - but still less than tech industry profits. | | |
| ▲ | Octoth0rpe an hour ago | parent [-] | | That is true, but not evenly distributed. Oracle, for example: https://arstechnica.com/information-technology/2025/11/oracl... Also, it may be true that these companies theoretically have the cash flow to cover the spending, but that doesn't mean they will be comfortable with that risk, especially as it becomes more likely in some kind of mass-extinction event amongst AI startups. To concretize that a bit: the remote possibility of having to give up all your profits for 2 years to pay off DC investment is fine at a 1% chance of happening, but maybe not so fine at a 40% chance. |
|
|
|
| ▲ | qoez 2 hours ago | parent | prev | next [-] |
| Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a slight bit. |
| |
| ▲ | layer8 9 minutes ago | parent | next [-] | | The author thinks that the bubble is a given (and doesn’t have to spell doom), and the best case is that there isn’t anything worse in addition. | |
| ▲ | threetonesun 2 hours ago | parent | prev [-] | | Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today.
Personally I don't think sticking AI into every piece of software is where the real value is; it's improving understanding of the huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail. I'm still pretty sure the infrastructure is going to get used, because the amount of data we collect and try to extract value from isn't going anywhere. | | |
|
|
| ▲ | philipkglass 2 hours ago | parent | prev | next [-] |
| I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water? |
| |
| ▲ | bix6 2 hours ago | parent | next [-] | | But with this play they can inflate their company holdings and cash out in new rounds. It’s the ultimate self enrichment scheme! Nobody wants that crappy piece of land but now it’s got GPUs and we can leverage that into a loan for more GPUs and cash out along the way. | |
| ▲ | kjkjadksj 2 hours ago | parent | prev | next [-] | | Because then you can buy calls on the GPU companies | |
| ▲ | exceptione 2 hours ago | parent | prev [-] | | Valid question. What the OP talks about though is that these things were not for sale normally. My takeaway from his essay is that a few oligarchs get a pass to take over all energy, by means of a manufactured crisis. When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.
He could have explained that better. Try not to look at the media drama the political actors give you each day, but at the agenda the real powers have laid bare:
- Trump is threatening an oil-rich neighbor with war, with a complete, expensive-as-hell army blowing up 'drug boats' (so they claim) to help the press sell it as a war on drugs. Yeah right.
- Green energy projects, even running ones, get cancelled. Energy from oil and nuclear is capital-intensive and at the same time completely outshone by solar and battery tech.
So the energy card is a strong one for directing policy towards your interests. If you can turn the USA into a resource economy like Russia, then you can rule like a Russian oligarch. That is also why the admin sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than having to rely on an educated populace that might start to doubt the promise of the American Dream. | |
| ▲ | amunozo an hour ago | parent [-] | | I did not think about it that way, but it makes perfect sense. And it is really scary. It hasn't even been a year since Trump's second term started. We still have three more years left. |
|
|
|
| ▲ | carlosjobim 2 hours ago | parent | prev | next [-] |
Let's take the highest perspective possible: what is the value of a technology which allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse from the Tower of Babel has been lifted. There will be a time in the future when people will not be able to comprehend that you once couldn't exchange information regardless of personal language skills. So what is the value of that? Economically, culturally, politically, spiritually? |
| |
| ▲ | Herring 2 hours ago | parent | next [-] | | Language is a lot deeper than that. It's like if I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past and worldview and hopefully future which I/we intend to invest in. | | | |
| ▲ | blauditore 2 hours ago | parent | prev | next [-] | | You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive. | |
| ▲ | 4ndrewl 2 hours ago | parent | prev | next [-] | | Which languages couldn't we translate before? Not you, the individual. We, humanity? | | |
| ▲ | carlosjobim 2 hours ago | parent [-] | | Machine translation was horrible and completely unreliable before LLMs, and human translators are very expensive and slow in comparison. LLMs are to translation what computers were to calculating. Sure, you could do without them before; they used to have entire buildings of office workers whose job it was to compute. | |
| ▲ | gizajob 2 hours ago | parent [-] | | Google translate worked great long before LLMs. | | |
| ▲ | doug_durham 2 hours ago | parent | next [-] | | I disagree. It worked passably and was better than no translation. The depth, correctness, and nuance is much better with LLMs. | |
| ▲ | verdverm 2 hours ago | parent | prev | next [-] | | LLMs are not the only "AI" | |
| ▲ | Kiro 2 hours ago | parent | prev | next [-] | | I don't think you understand how off that statement is. It's also pretty ignorant considering Google Translate barely worked at all for many languages. So no, it didn't work great and even for the best possible language pair Google Translate is not in the same ballpark. | |
| ▲ | dwedge an hour ago | parent | prev | next [-] | | Not really long before, although I suppose it's relative. Google translate was pretty garbage until around 2016-2017 and then it started really improving | |
| ▲ | carlosjobim 2 hours ago | parent | prev [-] | | It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally. |
|
|
| |
| ▲ | bix6 2 hours ago | parent | prev [-] | | We could communicate with people before LLMs just fine though? We have hand gestures, some people learn multiple languages, and Google Translate was pretty solid. I got by just fine in countries where I didn’t know the language, because hand gestures work or someone speaks English. What is the value of losing our uniqueness to a computer that lies and makes us all talk the same? | |
| ▲ | Kiro an hour ago | parent | next [-] | | Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further. | |
| ▲ | carlosjobim 2 hours ago | parent | prev [-] | | You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own and realize that there are enormous amounts of exchange between nations with differing languages, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers. Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly. | |
| ▲ | Profan an hour ago | parent [-] | | LLMs are here and Google Translate is still bad (surely, if it were as easy as just plugging the miraculous, perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still deals extremely poorly with. And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :) |
|
|
|
|
| ▲ | dvcoolarun 2 hours ago | parent | prev | next [-] |
I believe it’s a bubble. Every app interface is becoming similar to ChatGPT’s, claiming it will “help you automate” while drifting away from the app’s original purpose. Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money. |
|
| ▲ | hollowturtle an hour ago | parent | prev | next [-] |
The best AI is the hidden, silent, ubiquitous kind that works so well you don't feel it's there. Apple devices, and really many modern devices before the LLM hype era, had a lot of AI we didn't know about. Today if I read that a product has AI I feel let down, because most of the time it's a poorly integrated chatbot that, if you're willing to spend some time, will sooner or later impersonate Adolf Hitler and, who knows, maybe leak sensitive data or API metadata. The bubble needs to burst so we can go back to silently packing products with useful AI features without telling the world. |
|
| ▲ | jaketoronto an hour ago | parent | prev [-] |
| [dead] |