| ▲ | crazygringo 9 hours ago |
| Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1]: information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall. The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy. And so we should expect AI to look the same: it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are currently at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise with time, and costs will come down, as we learn to integrate it with best practices. [1] https://en.wikipedia.org/wiki/Productivity_paradox |
|
| ▲ | kace91 9 hours ago | parent | next [-] |
| The comparison seems flawed in terms of cost. A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer. Larger implementations like automating customer service might be more costly, but I think there are enough supposed short-term benefits that something should be showing up there. |
| |
| ▲ | abraxas 8 hours ago | parent | next [-] | | What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book. | | |
| ▲ | fdefitte 6 hours ago | parent | next [-] | | This is an underrated take. If you make someone 3x faster at producing a report nobody reads, you've improved nothing. The real gains from AI show up when it changes what work gets done, not just how fast existing work happens. Most companies are still in the "do the same stuff but with AI" phase. | | |
| ▲ | Stromgren 3 hours ago | parent | next [-] | | And if you make someone 3x faster at producing a report that 100 people have to read, but it now takes 10% longer to read and understand, you've lost overall value. | |
| ▲ | anon-3988 2 hours ago | parent [-] | | You are forgetting that they are now going to use AI to summarize it back. | | |
| ▲ | kombookcha an hour ago | parent | next [-] | | This is one of my major concerns about people trying to use these tools for 'efficiency'. The only plausible value in somebody writing a huge report and somebody else reading it is information transfer. LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high, and you will be worse off reading the summary than if you skimmed the first and last pages. In fact, you will be worse off than if you did nothing at all. Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material. Relatedly, I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones! | |
| ▲ | notahacker 19 minutes ago | parent | next [-] | | Yep. The other way it can have no net impact is if it saves thousands of hours of report drafting and reading but misses the one salient fact buried in the observations that could actually save the company money. Whilst completely nailing the fluff. |
| ▲ | birdsongs an hour ago | parent | prev | next [-] | | > LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high I could go either way on the future of this, but if you take the argument that we're still in the early days, this may not hold. They're notoriously bad at this so far. We could still be in the PC DOS 3.x era in this timeline. Wait until we hit the Windows 3.1 or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models. | |
| ▲ | kombookcha 19 minutes ago | parent | next [-] | | Personally I strongly doubt it. Since the nature of LLMs gives them no access to semantic content or context, I believe it is inherently a tool unsuited for this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of power behind it. Either way, being able to generate or compress loads of text very quickly with no understanding of the contents simply is not the bottleneck of information transfer between human beings. |
| ▲ | mcny an hour ago | parent | prev [-] | | I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context. |
| |
| ▲ | kykeonaut an hour ago | parent | prev [-] | | > Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones! That is true, but then again the same was true with Google. You can see why some people want to go back to the "read the book" era, where you didn't have Google to query everything and had to ask the real questions. |
| |
| ▲ | prmoustache an hour ago | parent | prev | next [-] | | This reminds me of that "telephone" kids game. https://en.wikipedia.org/wiki/Telephone_game | |
| ▲ | SpaceNoodled an hour ago | parent | prev [-] | | So what we now have is a very expensive and energy-intensive method for inflating data in a lossy manner. Incredible. | | |
|
| |
| ▲ | amelius 28 minutes ago | parent | prev | next [-] | | > The real gains from AI show up when it changes what work gets done, not just how fast existing work happens. Sadly AI is only capable of doing work that has already been done, thousands of times. | |
| ▲ | injidup 4 hours ago | parent | prev | next [-] | | Maybe the take is that those reports that people took a day to write were read by nobody in the first place and now those reports are being written faster and more of them are being produced but still nobody reads them. Thus productivity doesn't change. The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better. | | |
| ▲ | patrickk an hour ago | parent | next [-] | | > The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better. That's the solution if you're the business owner. That's definitely not the solution if you're a manager in charge of this useless activity; in fact, you should increase the amount of reports being written as much as humanly possible. More underlings under you = more power and prestige. This is the principal-agent problem writ large. As the comment above mentions, also see Graeber's Bullshit Jobs essay and book. |
| ▲ | beAbU 2 hours ago | parent | prev | next [-] | | The managerial class are like cats and closed doors. Of course they don't read the reports; who has time to read them? But don't even think about not sending the report: they like to have the option of reading it if they choose to do so. A closed door removes agency from a cat; an absent report removes agency from a manager. |
| ▲ | laserlight an hour ago | parent | prev [-] | | > Thus productivity doesn't change. Indeed, productivity has decreased, because now there’s more output that is waste and you are paying to generate that excess waste. |
| |
| ▲ | seanhunter 2 hours ago | parent | prev | next [-] | | What happens if (and I suspect this to be increasingly the case now) you make someone 3x faster at producing a report that nobody reads, and the recipients now use LLMs to not read the report, whereas before they were simply not reading it themselves? Then everyone saves time, which they can spend producing more things which other people will not read, and/or not reading the things that other people produce (using LLMs). Productivity through the roof. | |
| ▲ | carlosjobim an hour ago | parent | next [-] | | Now you know why GDP is higher than ever and people are poorer than ever. | |
| ▲ | nkrisc 40 minutes ago | parent | prev [-] | | Mmm I can’t wait to get home and grill up some Productivity for dinner. We’ll have so much Productivity and no jobs. Hopefully our billionaire overlords deign to feed us. |
| |
| ▲ | jacquesm 38 minutes ago | parent | prev | next [-] | | And the fact that you can make it 3x faster substantially increases the chances that nobody will read it in the first place. | |
| ▲ | Lerc an hour ago | parent | prev | next [-] | | What a load of nonsense, they won't be producing a report in a third of the time only to have no-one read it. They'll spend the same amount of time and produce a report three times the length, which will then go unread. | |
| ▲ | wiseowise 5 hours ago | parent | prev [-] | | Not a phase, I’d argue that 90% of modern jobs are bullshit to keep cattle occupied and economy rolling. | | |
| ▲ | nkrisc 38 minutes ago | parent | next [-] | | You know, that would almost be fine if everyone could afford a home and food and some pleasures. | |
| ▲ | Retric 4 hours ago | parent | prev | next [-] | | Jobs you don't notice or understand often look pointless. HR on the surface seems unimportant, but you'd notice if the company stopped having health insurance or sending your taxes to the IRS etc etc. In the end, when jobs are done right they seem to disappear. We notice crappy software or a poorly done HVAC system, not clean carpets. | |
| ▲ | nkrisc 37 minutes ago | parent | next [-] | | This just highlights the absurdity of having your employer responsible for your health insurance and managing your taxes for you. These should be handled by the government, equally for all. | |
| ▲ | jdasdf 2 hours ago | parent | prev [-] | | > HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc. Interesting on how the very example you give for "oh this job isn't really bullshit" ultimately ends up being useless for the business itself, and exists only as a result of regulation. No, health insurance being provided by employers, or tax withholding aren't useful things for anyone, except for the state who now offloads its costs onto private businesses. |
| |
| ▲ | yoyohello13 5 hours ago | parent | prev | next [-] | | Your claim and the claims that all white collar jobs are going to disappear in 12-18 months cannot both be true. I guess we will see. | | |
| ▲ | onion2k 4 hours ago | parent | next [-] | | It's possible to automate the pointless stuff without realising it's pointless. | | | |
| ▲ | beeflet 5 hours ago | parent | prev | next [-] | | I think they can both be true. Perhaps the innovation of AI is not that it automates important work, but because it forces people to question if the work has already been automated or is even necessary. | |
| ▲ | zzrrt 4 hours ago | parent | prev [-] | | Well, if a lot of it is bullshit that can also be done more efficiently with AI, then 99% of white-collar roles could be eliminated by the 1% using AI, and essentially both claims would be very close to true. |
| |
| ▲ | palmotea 3 hours ago | parent | prev [-] | | > Not a phase, I’d argue that 90% of modern jobs are bullshit to keep cattle occupied and economy rolling. Cattle? You actually think that about other people? | | |
| ▲ | wao0uuno 2 hours ago | parent | next [-] | | I think what he meant was that the top 1% ruling class is keeping those bullshit jobs around to keep the poor people (their cattle) occupied so they won't have time and energy to think and revolt. | | |
| ▲ | Ekaros 2 hours ago | parent [-] | | Or for everyone in the chain of command to have people to rule over. A common want for many in leadership positions. It works at least two ways: you want to control people, and your value to your peers is the number of people or resources you control. |
| |
| ▲ | KoftaBob an hour ago | parent | prev [-] | | It seems more like they're implying that it's those at the top who think that about other people. |
|
|
| |
| ▲ | hattmall 8 hours ago | parent | prev | next [-] | | I find that highly unlikely; coding is AI's best-value use case by far. Right now office workers see marginal benefits, but it's not like it's an order of magnitude difference. AI drafts an email, you have to check and edit it, then send it. In many cases it's a toss-up whether that actually saved time, and even when it did, it's not like the pace of work is breakneck anyway, so the benefit is that some office workers have a bit more idle time at the desk, because you always hit some wall that's out of your control. Maybe AI saves you a Google search or a doc lookup here and there. You still need to check everything, and it can cause mistakes that take longer too. Here's an example from today. An assistant is dispatching a courier to get medical records. AI auto-completes to include the address. Normally they wouldn't put the address, the courier knows who we work with, but AI added it, so why not. Except it's the wrong address, because it's for a different doctor with the same name. At least they knew to verify it, but still, mistakes like this happening at scale make the other time savings pretty close to a wash. | |
| ▲ | majormajor 4 hours ago | parent | next [-] | | Coding is a relatively verifiable and strict task: it has to pass the compiler, it has to pass the test suite, it has to meet the user's requests. There are a lot of white-collar tasks that have far lower quality and correctness bars. "Researching" by plugging things into Google. Writing reports summarizing how a trend that an exec saw a report on can be applied to the company. Generating new values to share at a company all-hands. Tons of these never touch the "real world." Your assistant story is like a coding task - maybe someone ran some tests, maybe they didn't, but it was verifiable. No shortage of "the tests passed, but they weren't the right tests, this broke some customers and had to be fixed by hand" coding stories out there like it. There are pages and pages of unverifiable bullshit that people are sleepwalking through, too, though. Nobody knows whether those things helped or hurt in the first place, so nobody will ever even notice a hallucination. But everyone in all those fields is going to be trying really really hard to enumerate all the reasons it's special and AI won't work well for them. The "management says do more, workers figure out ways to be lazier" see-saw is ancient, but this could skew far towards the "management demands more from fewer people" spectrum for a while. | |
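A toy illustration of the "tests passed, but they weren't the right tests" failure mode (a minimal sketch; the function and numbers are invented):

    def apply_discount(price, percent):
        # Bug: nothing stops percent > 100, which yields a negative price.
        return price * (1 - percent / 100)

    def test_apply_discount():
        # Green, but it only exercises the happy path.
        assert apply_discount(100.0, 50.0) == 50.0

    test_apply_discount()                 # passes...
    print(apply_discount(100.0, 150.0))   # ...yet this ships: -50.0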
| ▲ | t43562 2 hours ago | parent | next [-] | | Code may have to compile, but that's a lowish bar, and since the AI is writing the tests it's obvious that they're going to pass. In areas where there are fewer easy ways to judge output, there is correspondingly more value in getting "good" people. Some AI that can produce readable reports isn't "good": what matters is the quality of the work and the insight put into it, which can only be ensured by looking at the worker's reputation and past history. |
| ▲ | pydry 8 minutes ago | parent | prev [-] | | >Coding is a relatively verifiable and strict task: it has to pass the compiler, it has to pass the test suite, it has to meet the user's requests. Except the test suite isn't just something that appears, and the bugs don't necessarily get covered by the test suite. The bugginess of a lot of the software I use has spiked in a very noticeable way, probably due to this. >But everyone in all those fields is going to be trying really really hard to enumerate all the reasons it's special and AI won't work well for them. No, not everyone. Half of them are trying to lean in to the changing social reality. The gaslighting from the executive side, on the other hand, is nearly constant. |
| |
| ▲ | sanex 6 hours ago | parent | prev | next [-] | | Not all code generates economic value. See Slack's, Jira's, etc., constant UI updates. | |
| ▲ | fakedang 2 hours ago | parent [-] | | That makes it a perfect use case for AI, since now you don't need a dev for that. Any devs doing that would, imo, be effectively performing one of David Graeber's bullshit jobs. |
| |
| ▲ | vrighter 3 hours ago | parent | prev | next [-] | | Code is much, much harder to check for errors than an email. Consider, for example, the following Python code:

    x = (5)

vs

    x = (5,)

One is a literal 5, and the other is a single-element tuple containing the number 5. But more importantly, both are valid code. Now imagine trying to spot that one missing comma among the 20kloc of code one so proudly claims AI helped them "write", especially if it's in a cold path. You won't see it. | |
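To make the difference concrete (a minimal sketch; the variable names are arbitrary):

    x = (5)    # parentheses here are just grouping: x is the int 5
    y = (5,)   # the trailing comma is what makes it a tuple
    print(type(x))   # <class 'int'>
    print(type(y))   # <class 'tuple'>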
| ▲ | lock1 an hour ago | parent [-] | | > Code is much, much harder to check for errors than an email. Disagree. Even though performing checks on dynamic PLs is much harder than on static ones, PLs are designed to be non-ambiguous. There should be exactly one interpretation for any syntactically valid expression. Your example will resolve unambiguously to exactly one meaning in a standard-conforming Python interpreter. On the other hand, natural languages have no such restriction on ambiguity. That's why something like Poe's law exists. There's simply no way to resolve the ambiguity by just staring at the words themselves; you need additional information to know the author's intent. In other words, an "English interpreter" cannot exist. Remove the ambiguities so that an interpreter can exist, and you'll end up with a non-ambiguous, Python- or COBOL-like language. With that said, I agree with your point that blindly accepting 20kloc is certainly not a good idea. |
| |
| ▲ | nradov 7 hours ago | parent | prev [-] | | LLMs might not save time but they certainly increase quality for at least some office work. I frequently use it to check my work before sending to colleagues or customers and it occasionally catches gaps or errors in my writing. | | |
| ▲ | toraway 6 hours ago | parent [-] | | But that idealized example could also be offset by another employee who doubles their own output by churning out lower-quality unreviewed workslop all day without checking anything, while wasting other people's time. | | |
| ▲ | sunrunner an hour ago | parent [-] | | Something I call the 'Generate First, Review Never' approach, seemingly favoured by my colleagues, which has the magical quality of increasing the overall amount of work done: N receivers of a low-quality document now each have to spend more time reviewing, understanding, and fact-checking it. See also: AI-Generated "Workslop" Is Destroying Productivity [1] [1] https://hbr.org/2025/09/ai-generated-workslop-is-destroying-... |
|
|
| |
| ▲ | jen729w 12 minutes ago | parent | prev | next [-] | | Have had job. Can confirm. | |
| ▲ | Aurornis 7 hours ago | parent | prev | next [-] | | > but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book. That book was very different from what I expected from all of the internet comment takes about it. The premise was really thin and didn't actually support the idea that the jobs don't generate value. It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society. He couldn't even keep that straight, though. There's a part where he argues that open source work is valuable but corporate programmers are doing bullshit work that isn't socially productive because they're connecting disparate things together with glue code? It didn't make sense and you could see that he didn't really understand software, other than how he imagined it fitting into his idealized world where everything anarchist and open source is good and everything corporate and capitalist is bad. Once you see how little he understands about a topic you're familiar with, it's hard to unsee it in his discussions of everything else. That said, he still wasn't arguing that the work didn't generate economic value. Jobs that don't provide value for a company are cut, eventually. They exist because the company gets more benefit out of the job existing than it costs to employ those people. The "bullshit jobs" idea was more about feelings and notions of societal impact than economic value. | |
| ▲ | EliRivers 3 hours ago | parent | next [-] | | "They exist because the company gets more benefit out of the job existing than it costs to employ those people." Sure, but there's no such thing as "the company." That's shorthand - a convenient metaphor for a particular bunch of people doing some things. So those jobs can exist if some people - even one person - gets more benefit out of the job existing than it costs that person to employ them. For example, a senior manager padding his department with non-jobs to increase headcount, because it gives him increased prestige and power, and the cost to him of employing that person is zero. Will those jobs get cut "eventually"? Maybe, but I've seen them go on for decades. | |
| ▲ | AnthonyMouse 3 hours ago | parent | prev | next [-] | | > There's a part where he argues that open source work is valuable but corporate programmers are doing bullshit work that isn't socially productive because they're connecting disparate things together with glue code? I don't know if maybe he wasn't explaining it well enough, but that kind of reasoning makes some sense. A lot of code is written because you want the output from Foo to be the input to Bar and then you need some glue to put them together. This is pretty common when Foo and Bar are made by different people. With open source, someone writes the glue code, publishes it, and then nobody else has to write it because they just use what's published. In corporate bureaucracies, Company A writes the glue code but then doesn't publish it, so Company B which has the same problem has to write it again, but they don't publish it either. A hundred companies are then doing the work that only really needed to be done once, which makes for 100 times as much work, a 1% efficiency rate and 99 bullshit jobs. | |
| ▲ | mikem170 6 hours ago | parent | prev | next [-] | | Hmmm, I got something different. I thought that Bullshit Jobs was based on people who self-reported that their jobs were pointless. He detailed these types of jobs and the negative psychological impact this can have on employees, and the kicker was that these jobs don't make sense economically (the bureaucratization of the health care and education sectors, for example), in contrast to so many other professions that actually are useful. Other examples were status-symbol employees, sycophants, duct-tapers, etc. I thought he made a case for both societal and economic impact. |
| ▲ | wiseowise 5 hours ago | parent | prev | next [-] | | > They exist because the company gets more benefit out of the job existing than it costs to employ those people. Not necessarily, I’ve seen a lot of jobs that were just flying under the radar. Sort of like a cockroach that skitters when light is on but roams freely in the dark. | | | |
| ▲ | wolvesechoes 3 hours ago | parent | prev | next [-] | | > The "bullshit jobs" idea was more about feelings and notions of societal impact than economic value. But he states that expressis verbis, so your discovery is not that spectacular. Although he gives examples of jobs, or some aspects of jobs, that don't help to deliver what specific institutions aim to deliver. Example would be bureaucratization of academia. | |
| ▲ | ccortes 6 hours ago | parent | prev | next [-] | | > It was comparing to a hypothetical world where everything is perfectly organized, everyone is perfectly behaved, everything is perfectly ordered, and therefore we don't have to have certain jobs that only exist to counter other imperfect things in society. > Jobs that don't provide value for a company are cut, eventually. Uhm, seems like Graeber is not the only one drawing conclusions from a hypothetical perfect world. |
| ▲ | DiggyJohnson 6 hours ago | parent | prev [-] | | Graeber's best book is his ethnography "Lost People" and it's one of his least-read works. Bullshit Jobs was never intended to be read as seriously as it is criticized. | |
| ▲ | Aurornis 6 hours ago | parent [-] | | Honestly this is how every critique of Graeber goes in my experience: As soon as his works are discussed beyond surface level, the goalposts start zooming around so fast that nothing productive can be discussed. I tried to respond to the specific conversation about Bullshit Jobs above. In my experience, the way this book is brought up so frequently in online conversations is used as a prop for whatever the commenter wants it to mean, not what the book actually says. I think Graeber did a fantastic job of picking "bullshit jobs" as a topic because it sounds like something that everyone implicitly understands, but how it's used in conversation and how Graeber actually wrote about the topic are basically two different things |
|
| |
| ▲ | jama211 7 hours ago | parent | prev | next [-] | | I think it’s more likely that the same amount of work is getting done, just it’s far less taxing. And that averages are funny things, for developers it’s undeniably a huge boost, but for others it’s creating friction. | |
| ▲ | lolive 4 hours ago | parent | prev | next [-] | | We made an under-the-radar optimization in a data flow in my company. A given task is now much more fresh-data-assisted than it used to be. Was an LLM used during that optimization? Yes. Who will connect the sudden productivity improvement to our optimization of the data flow, and that optimization to the availability of an LLM that could do it fast enough that no project+consultants+management was needed? No one. Just like no one is evaluating the value of a hammer or a ladder when you build a house. | |
| ▲ | camgunz 3 hours ago | parent | next [-] | | But you would see more houses, or housing build costs/bids fall. This is where the whole "show me what you built with AI" meme comes from, and currently there's no substitute for SWEs. Maybe next year or next next year, but mostly the usage is generating boring stuff like internal tool frontends, tests, etc. That's not nothing, but because actually writing the code was at best 20% of the time cost anyway, the gains aren't huge, and won't be until AI gets into the other parts of the SDLC (or the SDLC changes). | |
| ▲ | fragmede 3 hours ago | parent | prev | next [-] | | CONEXPO, World of Concrete, and NAHB IBS is where vendors go to show off their new ladders and the attendees totally evaluate the value of those ladders vs their competitors. | |
| ▲ | ViewTrick1002 3 hours ago | parent | prev [-] | | Is there a productivity improvement resulting tangible economic results coming from that optimization? It’s easy to convince yourself that it is, and anyone can massage some internal metric enough to prove their desired outcome. |
| |
| ▲ | overgard 6 hours ago | parent | prev | next [-] | | I think this is extremely common and nobody wants to admit to it! | |
| ▲ | protocolture 7 hours ago | parent | prev | next [-] | | Would hardly drag Graeber into this; there's a laundry list of issues with his research. Most "Bullshit Jobs" can already be automated, but can isn't always should or will. Graeber is a capex thinker in an opex world. |
| ▲ | nradov 7 hours ago | parent | prev | next [-] | | Bullshit Jobs is one of those "just so" stories that seems truthy but doesn't stand up to any critical evaluation. Companies are obviously not hesitant to lay off unproductive workers. While in large enterprises there is some level of empire building where managers hire more workers than necessary just to inflate their own importance, in the long run those businesses fall to leaner competitors. | | |
| ▲ | ccortes 6 hours ago | parent | next [-] | | > in the long run those businesses fall to leaner competitors This is not true at all. You can find plenty of examples going either way but it’s far from truth from being a universal reality | |
| ▲ | wiseowise 5 hours ago | parent | prev | next [-] | | > Companies are obviously not hesitant to lay off unproductive workers. Companies are obviously not hesitant to lay off anyone, especially for cost saving. It is interesting how you think that people are laid off because they’re unproductive. | |
| ▲ | busterarm 5 hours ago | parent | prev [-] | | It's only after decades of experience and hindsight that you realize that a lot of the important work we spend our time on has extremely limited long-term value. Maybe you're lucky enough to be doing cutting-edge research or something that really seriously impacts human beings, but I've done plenty of "mission critical right fucking now" work that a week from now (or even hours from now, when I worked for a content marketing business) is beyond irrelevant. It's an amazing thing watching marketing types set money on fire burning super-expensive developer time (but salaried, so they discount the cost to zero) just to make their campaigns like 2-3% more efficient. I've intentionally sat on plenty of projects that somebody was pushing really hard for because they thought it was the absolute right and necessary thing at the time, and that the stakeholder realized was pointless/worthless after a good long shit and shower. This one move has saved literally man-years of work and IMO is the #1 most important skill people need to learn ("when to just do nothing"). |
| |
| ▲ | groundzeros2015 8 hours ago | parent | prev | next [-] | | And that book sort of vaguely hints around at all these jobs that are surely bullshit but won’t identify them concretely. Not recognizing the essential role of sales seemed to be a common mistake. | | |
| ▲ | bubblewand 7 hours ago | parent [-] | | What counts as "concretely"? And I don't recall it calling sales bullshit. It identified advertising as part of the category that it classed as heavily-bullshit-jobs by reason of being zero-sum: your competitor spends more, so you spend more to avoid falling behind, the standard red queen's race. (Another in this category was the military, which is kinda the classic case of this; see also the Missile Gap, the dreadnought arms race, etc.) But not sales, IIRC. | |
| ▲ | groundzeros2015 6 hours ago | parent | next [-] | | > And I don’t recall it calling sales bullshit. It says stuff like why can’t a customer just order from an online form? The employee who helps them doesn’t do anything except make them feel better. Must be a bullshit job. It talks specifically about my employees filling internal roles like this. > advertising I understand the arms race argument, but it’s really hard to see what an alternative looks like. People can spend money to make you more aware of something. You can limit some modes, but that kind of just exists. I don’t see how they aren’t performing an important function. | | |
| ▲ | tehjoker 4 hours ago | parent [-] | | It's an important function in a capitalist economy. Socialist economies are like "adblock for your life". That said, some advertising can be useful to inform consumers that a good exists, but convincing them they need it by synthesizing desires or fighting off competitors? Useless and socially detrimental. | | |
| ▲ | dns_snek 4 hours ago | parent | next [-] | | > Socialist economies are like "adblock for your life". There's nothing inherent to socialism that would preclude advertising. It's an economic system where the means of production (capital) is owned by the workers or the state. In market socialism you still have worker cooperatives competing on the market. | |
| ▲ | Wilder7977 4 hours ago | parent | prev | next [-] | | Plus, a core part of what qualifies as a bullshit job is that the person doing it feels that it's a bullshit job. The book is a half-serious anthropological essay, not an economic treatise. | |
| ▲ | wolvesechoes 2 hours ago | parent [-] | | Yeah, guy states that in multiple places, and yet here we are, with an impression that most people referencing the book apparently didn't read it. |
| |
| ▲ | inglor_cz 3 hours ago | parent | prev [-] | | "Socialist economies are like "adblock for your life"." Ever actually lived in anything approaching one? Yeah, if the stores are empty, it does not make sense to produce ads for stuff that isn't there ... ... but we still had ads on TV, surprisingly, even for stuff that was in shortage (= almost everything). Why? Because the Plan said so, and disrespecting the Plan too openly would stray dangerously close to the crime of sabotage. You have no idea. | | |
| ▲ | dns_snek 2 hours ago | parent [-] | | None of that is inherent to socialism. There can be good and bad management, freedom and authoritarianism in any economic system. | | |
| ▲ | inglor_cz 2 hours ago | parent [-] | | Socialist economies larger than kibbutzes could only be created and sustained by totalitarian states. Socialism means collective ownership of the means of production. And people won't give up their shops and fields and other means of production to the government voluntarily, at least not en masse. Thus they have to be forced at gunpoint, and they always were. All the subsequent horror is downstream from that. This is what is inherent to building a socialist economy: mass expropriation of the former "exploitative class". The bad management of the stolen assets is just a consequence, because ideologically brainwashed partisans are usually bad at managing anything, including themselves. | |
| ▲ | dns_snek 2 hours ago | parent [-] | | This is exactly what I meant, a centrally-planned economy where the state owns everything and people are forced to give everything up is just one terrible (Soviet) model, not some defining feature of socialism. Yugoslavia was extremely successful, with economic growth that matched or exceeded most capitalist European economies post-WW2. In some ways it wasn't as free as western societies are today but it definitely wasn't totalitarian, and in many ways it was more free - there's a philosophical question in there about what freedom really is. For example Yugoslavia made abortion a constitutionally protected right in the 70s. I don't want to debate the nuances of what's better now and what was better then as that's beside the point, which is that the idiosyncrasies of the terrible Soviet economy are not inherent to "socialism", just like the idiosyncrasies of the US economy aren't inherent to capitalism. | | |
| ▲ | inglor_cz an hour ago | parent [-] | | "just one terrible (Soviet) model" It is the model, introduced basically everywhere where socialism was taken seriously. It is like saying that cars with four wheels are just one terrible model, because there were a few cars with three wheels. Yugoslavia was a mixed economy with a lot of economic power remaining in private hands. You cannot point at it and say "hey, successful socialism". Tito was a mortal enemy of Stalin, struck a balanced, neither-East-nor-West but fairly friendly to the West, policy already in 1950, and his collectivization efforts were a fraction of what Marxist-Leninist doctrine demands. You also shouldn't discount the effect of sending young Yugoslavs to work in West Germany on the total balance sheet. A massive influx of remittances in Deutsche Mark was an important factor in Yugoslavia getting richer, and there was nothing socialist about it; it was an overflow of quick economic growth in a capitalist country. | |
| ▲ | dns_snek an hour ago | parent [-] | | You've created a tautology: Socialism is bad because bad models are socialism and better models are not-socialism. > You cannot point at it and say "hey, successful socialism" Yes I can because ideological purity doesn't exist in the real world. All of our countries are a mix of capitalist and socialist ideas yet we call them "capitalist" because that's the current predominant organization. > Tito was a mortal enemy of Stalin, stroke a balanced neither-East-nor-West, but fairly friendly to the West policy already in 1950, and his collectivization efforts were a fraction of what Marxist-Leninist doctrine demands. You're making my point for me, Yugoslavia was completely different from USSR yet still socialist. Socialism is not synonymous with Marxist-Leninist doctrine. It's a fairly simple core idea that has an infinite number of possible implementations, one of them being market socialism with worker cooperatives. Aside from that short period post-WW2, no socialist or communist nation has been allowed to exist without interference from the US through oppressive economic sanctions that would cripple and destroy any economy regardless of its economic system, but people love nothing more than to draw conclusions from these obviously-invalid "experiments". "You" (and I mean the collective you) are essentially hijacking the word "socialism" to simply mean "everything that was bad about the USSR". The system has been teaching and conditioning people to do that for decades, but we should really be more conscious and stop doing that. |
|
|
|
|
|
|
| |
| ▲ | thesmtsolver2 6 hours ago | parent | prev [-] | | How does that make advertising a bullshit job? The only way advertising won't exist or won't be needed is when humanity becomes a hive mind and removes all competition. | | |
| ▲ | bubblewand 5 hours ago | parent | next [-] | | The parts that are only done to maintain the status quo with a competitor aren't productive, and that's quite a bit of it. Two (or more) sides spend money, nothing changes. No good is produced. The whole exercise is basically an accident. Like when a competing country builds their tenth battleship, so you commission another one to match them. The world would have been better if neither had been built. Money changed hands (one supposes) but the whole exercise had no effect on its aim. It was similar to paying people to dig holes and fill them back in again, to the tune of serious money. This was so utterly stupid and wasteful that there was a whole treaty about it, to try to prevent so many bullshit jobs from being created again. Or when Pepsi increases their ad spending in Brazil, so Coca-Cola counters, and much of the money ends up accomplishing little except keeping things just how they were. That component or quality of the ad industry, the book claims, is bullshit, on account of not doing any good. The book treats of several ways in which a job might be bullshit, and just kinda mentions this one as an aside: the zero-sum activity. It mostly covers other sorts, but this is the closest I can recall it coming to declaring sales "bullshit" (the book rarely, bordering on never, paints even most of an entire industry or field as bullshit, and advertising isn't sales, but it's as close as it got, as I recall). |
| ▲ | wiseowise 5 hours ago | parent | prev [-] | | Best product should be picked according to requirements by LLM without bullshit advertising. |
|
|
| |
| ▲ | emp17344 8 hours ago | parent | prev [-] | | The thesis of Bullshit Jobs is almost universally rejected by economists, FYI. There’s not much of value to obtain from the book. | | |
| ▲ | simonask an hour ago | parent | next [-] | | As a layman, I have to say the collective credibility of economists does not inspire confidence. | |
| ▲ | wolvesechoes 2 hours ago | parent | prev | next [-] | | Not surprising, because the thesis is not about the economy. > There's not much of value to obtain from the book. Anthropological insight has much more value than anything economists may produce about the economy. |
| ▲ | gzread 3 hours ago | parent | prev [-] | | Why should I believe "economists" over "David Graeber"? |
|
| |
| ▲ | vidarh 11 minutes ago | parent | prev | next [-] | | A $20 Claude subscription lets you scratch the surface. A $20 Claude subscription without training means you have a lot of people spending time figuring out how to use it, and then maybe getting a bit of payback, but earning back that training is going to take time. Getting people to figure out how to enter questions is easy. Getting people to a point where they don't burn up all the savings by getting into unproductive conversations with the agent when it gets something wrong is not so easy. |
| ▲ | gruez 8 hours ago | parent | prev | next [-] | | How viable are the $20/month subscriptions for actual work and are they loss making for Anthropic? I've heard both of people needing to get higher tiers to get anything done in Claude Code and also that the subscriptions are (heavily?) subsidized by Anthropic, so the "just another $20 SaaS" argument doesn't sound too good. | | |
| ▲ | simonw 8 hours ago | parent | next [-] | | I am confident that Anthropic make more revenue from that $20 than the electricity and server costs needed to serve that customer. Claude Code has rate limits for a reason: I expect they are carefully designed to ensure that the average user doesn't end up losing Anthropic money, and that even extreme heavy users don't cause big enough losses for it to be a problem. Everything I've heard makes me believe the margins on inference are quite high. The AI labs lose money because of the R&D and training costs, not because they're giving electricity and server operational costs away for free. | |
| ▲ | tverbeure 7 hours ago | parent | next [-] | | Nobody questions that Anthropic makes revenue from a $20 subscription. The opposite would be very strange. | | |
| ▲ | simonw 6 hours ago | parent | next [-] | | A lot of people believe that Anthropic lose money selling tokens to customers because they are subsidizing it for growth. | | |
| ▲ | Drakim 3 hours ago | parent [-] | | But that has zero effect on revenue, it only affects profit. |
| |
| ▲ | brandensilva 7 hours ago | parent | prev [-] | | Yeah it's the caching that's doing the work for them though honestly. So many cached queries saving the GPUs from hard hits. | | |
| ▲ | xienze 3 hours ago | parent [-] | | How is caching implemented in this scenario? I find it unlikely that two developers are going to ask the same exact question, so at a minimum some work has to be done to figure out “someone’s asked this before, fetch the response out of the cache.” But then the problem is that most questions are peppered with specific context that has to be represented in the response, so there’s really no way to cache that. | | |
| ▲ | marcyb5st 3 hours ago | parent [-] | | From my understanding (which is poor at best), the cache is about the separate parts of the input context. Once the LLM reads a file, the content of that file is cached (i.e. some representation that the LLM creates for that specific file, though I really have no idea how that works). So the next time you bring that file into the context, directly or indirectly, the LLM doesn't have to do a full pass, but pulls its understanding/representation from the cache and uses that to answer your question/perform the task. |
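A toy sketch of that idea (illustrative only: real systems cache attention key/value state server-side keyed on exact prompt prefixes, and every name below is invented):

    import hashlib

    _cache = {}

    def encode_prefix(prefix):
        # Stand-in for the expensive forward pass over a context prefix.
        return "kv-state for %d chars" % len(prefix)

    def get_prefix_state(prefix):
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key not in _cache:
            _cache[key] = encode_prefix(prefix)  # pay the compute cost once
        return _cache[key]

    file_text = "def main(): ..."
    a = get_prefix_state(file_text)
    b = get_prefix_state(file_text)  # cache hit: nothing recomputed on resend
    assert a is b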
|
|
| |
| ▲ | Esophagus4 8 hours ago | parent | prev | next [-] | | I always assumed that with inference being so cheap, my subscription fees were paying for training costs, not inference. | | |
| ▲ | beAbU 2 hours ago | parent | next [-] | | Is inference really that cheap? Why can't I do it at home with a reasonable amount of money? | |
| ▲ | simonw 6 hours ago | parent | prev | next [-] | | Anthropic and OpenAI are both well documented as losing billions of dollars a year because their revenue doesn't cover their R&D and training costs, but that doesn't mean their revenue doesn't cover their inference costs. | | |
| ▲ | overgard 6 hours ago | parent | next [-] | | Does it matter if they can't ever stop training though? Like, this argument usually seems to imply that training is a one-off, not an ongoing process. I could save a lot of money if I stopped eating, but it'd be a short lived experiment. I'll be convinced they're actually making money when they stop asking for $30 billion funding rounds. None of that money is free! Whoever is giving them that money wants a return on their investment, somehow. | | |
| ▲ | vidarh 3 hours ago | parent | next [-] | | At some point the players will need to reach profitability. Even if they're subsidising it with other revenue, they'll only be willing to do that as long as it drives rising inference revenue. Once that happens, whoever is left standing can dial back the training investment to whatever their share of inference can bear. | |
| ▲ | ben_w 3 hours ago | parent [-] | | > Once that happens, whoever is left standing can dial back the training investment to whatever their share of inference can bear. Or, if there are two people left standing, they may compete with each other on price rather than performance and each end up with cloud compute's margins. | |
| ▲ | vidarh an hour ago | parent [-] | | Sure, but at some point they will still need to dial it back to where they can fund it out of inference. The point is that the fact they can't do that now is irrelevant - it's a game of chicken at the moment, and that might kill some of them, but the game won't last forever. |
|
| |
| ▲ | simonw 5 hours ago | parent | prev | next [-] | | It matters because as long as they are selling inference for less than it costs to serve they have a potential path to profitability. Training costs are fixed at whatever billions of dollars per year. If inference is profitable they might conceivably make a profit if they can build a model that's good enough to sign up vast numbers of paying customers. If they lose even more money on each new customer they don't have any path to profitability at all. | | |
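In rough numbers, the shape of that argument (a toy sketch; every figure below is invented for illustration, not anyone's actual economics):

    price = 20.0                   # $/month subscription
    inference_cost = 8.0           # hypothetical $/month to serve that user
    margin = price - inference_cost
    fixed_monthly = 3e9 / 12       # hypothetical $3B/yr training + R&D burn
    print(fixed_monthly / margin)  # ~20.8M subscribers needed to break even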
| ▲ | citrin_ru an hour ago | parent [-] | | > If they lose even more money on each new customer they don't have any path to profitability at all. In theory they can increase prices once the customers are hooked. That's how many startups work. |
| |
| ▲ | krainboltgreene 5 hours ago | parent | prev [-] | | There's an argument to be made that a "return on investment by way of eliminating all workers" is a reasonable result for the capitalists. | | |
| ▲ | generic92034 3 hours ago | parent [-] | | At least until they are running out of customers. And/or societies with mass-unemployment destabilize to a degree that is not conducive for capitalists' operations. | | |
|
| |
| ▲ | vrighter 3 hours ago | parent | prev [-] | | Models are fixed. They do not learn post-training, which means that training needs to be ongoing. So the revenue covers the inference? So what? All that means is that it doesn't cover your costs and you're operating at a loss, because it doesn't cover the training that you can't stop doing either. |
| |
| ▲ | smashed 7 hours ago | parent | prev [-] | | Doubtful |
| |
| ▲ | what 5 hours ago | parent | prev [-] | | >make more revenue from that $20 than the electricity and server costs needed to serve that customer Seems like a pretty dumb take. It's like saying it only takes $X in electricity and raw materials to produce a widget that I sell for $Y. Since $Y is bigger than $X, I'm making money! Just ignore that I have to pay people to work the lines. Ignore that I had to pay huge amounts to build the factory. Ignore every other cost. They can't just fire everyone and stop training new models. | |
| |
| ▲ | _jss 7 hours ago | parent | prev | next [-] | | Merely for the viability part: I use the $20/mo plan now, but only as a part-time independent dev. I will hit rate limits with Opus on any moderately complex app. If I am on a roll, I will flip on Extra Usage. I prototyped a fully functional and useful niche app in ~6 total hours and $20 of extra usage, and it's solid enough and proved enough value to continue investing in and eventually ship to the App Store. Without Claude I likely wouldn't have gotten to the finished prototype version to use in the real world. For indie dev, I think LLMs are a new source of solutions. This app is too niche to justify building and marketing without LLM assistance. It likely won't earn more than $25k/year but good enough! |
| ▲ | Aurornis 7 hours ago | parent | prev | next [-] | | I don't think the assumption that Anthropic is losing money on subscriptions holds up. I think each additional customer provides more revenue than the cost to run their inference, on average. For people doing work with LLMs as an assistant for codebase searching, reviews, double checks, and things like that, the $20/month plan is more than fine. The closer you get to vibecoding and trying to get the LLM to do all the work, the more you need the $100 and $200 plans. On the ChatGPT side, the $20/month subscription plan for GPT Codex feels extremely generous right now. I tried getting to the end of my window usage limit one day and could not. > so the "just another $20 SaaS" argument doesn't sound too good Having seen several companies' SaaS bills, even $100/month or $200/month for developers would barely change anything. |
| ▲ | 8note 8 hours ago | parent | prev [-] | | I'd guess the $200 subscription is sufficient per person, but at that point you could go for a bigger one and split it amongst headcount. |
| |
| ▲ | jstummbillig 2 hours ago | parent | prev | next [-] | | I see no reason to believe that just handing a Claude subscription to everyone in a company simply creates economic benefit. I don't think it's easier than "automating customer service". It's actually very strange. I think it could definitely already create economic benefit, after someone is clearly instructed in how to use it and how to integrate it into their work. Most people are really not good at figuring that out on their own, in a busy workday, when left to their own devices, and companies are still finding out where the ball is moving and what to organize around too. So I can totally see a lot of failed experiments and people slowly figuring stuff out, and all of that not translating to measurable surpluses in a corp, in a setup similar to what OP laid out. |
| ▲ | azuanrb 7 hours ago | parent | prev | next [-] | | $20 is not usable; you need at least the $100 plan for development purposes. That is a lot of money in some countries. In my country, that can be 1/10 of a monthly salary. Hard to get approval for it. It is still too expensive right now. | |
| ▲ | iwontberude 2 hours ago | parent [-] | | Yeah, it's not obvious at first, but a big project will cause usage to skyrocket because of how much context it will stuff by reading files. I can use up my $20 subscription's 5-hour limit in mere seconds. |
| |
| ▲ | 46493168 8 hours ago | parent | prev | next [-] | | >I think there are enough short term supposed benefits that something should be showing there. As measured by whom? The same managers who demanded we all return to the office 5 days a week because the only way they can measure productivity is butts in seats? | | |
| ▲ | bryanlarsen 8 hours ago | parent [-] | | Productivity is the ratio of outputs to inputs, both measured in dollars. | | |
| ▲ | tonyedgecombe 4 hours ago | parent [-] | | Productivity is the ratio of real GDP to total hours worked. | | |
| ▲ | bryanlarsen 18 minutes ago | parent [-] | | That's labor productivity, a different measure. But the original article references labor productivity, so your definition is more relevant. |
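For concreteness, the second definition with rough illustrative numbers (the ~$29T GDP figure appears downthread; the hours are a round guess):

    real_gdp = 29e12          # ~$29T US GDP
    hours = 160e6 * 1800      # ~160M workers x ~1800 hours/year, roughly
    print(real_gdp / hours)   # ~= $100 of output per hour worked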
|
|
| |
| ▲ | TimByte an hour ago | parent | prev | next [-] | | I think the subscription price is only the visible tip of the iceberg | |
| ▲ | meager_wikis 8 hours ago | parent | prev | next [-] | | If anything, the 'scariness' of an old computer probably protected the company in many ways. AI's approachability to the average office worker, specifically how it makes it seem like it easy to deploy/run/triage enterprise software, will continue to pwn. | |
| ▲ | geraneum an hour ago | parent | prev | next [-] | | > A Claude subscription is 20 bucks per worker Talking about macroeconomics, I don't think that number is correct. |
| ▲ | dahcryn 4 hours ago | parent | prev | next [-] | | Not true at all; onboarding is complex too. E.g. you can't just connect Claude to your Outlook, or have it automate stuff in your CRM. As an office drone, you don't have the admin permissions to set up those connections at all. And that's the point here: value is handicapped by the web interface, and we are stuck there for the foreseeable future until the tech teams get their priorities straight and build decent data integration layers and workflow management platforms. |
| ▲ | overgard 6 hours ago | parent | prev | next [-] | | I've never looked at enterprise licensing, but regular license wise, a Claude subscription is actually $200 a month. I don't count the $20 or $100 tiers because they're too limited to be useful (especially professionally!) | |
| ▲ | vessenes 7 hours ago | parent | prev | next [-] | | Agreed. We do have a way to see the financial impact: just add up Anthropic's and OpenAI's reported revenues -> something like $30b in annual run rate. Given growth rates (stratospheric), it seems reasonable to conclude informed buyers see economic and/or strategic benefit in excess of their spend. I certainly do! That puts the benefits to the economy at just around where Mastercard's benefits are, on a dollar basis. But with a lot more growth. Add something in there for MS and GOOG, and we're probably at least another $5b up. There are only like 30 US companies with > $100bn in revenues; at current growth rates, we'll see combined revenues in this range in a year. All this is sort of peanuts, though, against $29 trillion of GDP: around 0.3%. Well, not peanuts; it's boosting US GDP by 10% of its historical growth rate, but the bull case from singularity folks is like 10%+ GDP growth; if we start seeing that, we'll know it. All that said, there is real value being added to the economy today by these companies. And no doubt a lot of time and effort spent figuring out what the hell to do with it as well. | |
| ▲ | mikem170 6 hours ago | parent [-] | | Investors are optimistic, but what will this new tech be used for? Advertising? Propaganda? Surveillance? Drone strikes? Does profitable always equal useful? Might other cultures justifiably think differently, like the Amish? | | |
| ▲ | vessenes 6 hours ago | parent [-] | | The Amish are skilled at getting cash from the “English” as they call non-Amish. I imagine they also think that the money they receive is roughly tied to value they create. I wasn’t talking valuations, just revenue - money that CFOs and individuals spent so far, and are planning on spending. I also didn’t talk profitable. Upshot, though, I don’t think it’s just a US thing to say that when money exchanges hands, generally both parties feel they are better off, and therefore there is value implied in a transaction. As to what it will be used for: yes. | | |
| ▲ | mikem170 5 hours ago | parent [-] | | You did specify revenue. The original comment mentioned benefits. I was thinking that the two are different. |
|
|
| |
| ▲ | y42 2 hours ago | parent | prev | next [-] | | Problem is that just having a Claude subscription doesn't make you productive. Most of those talks happen in tech-ish environments. Not every business is about coding. Real-life example: a client came to me asking how to compare orders against order confirmations from the vendor. They come as PDF files. Which made me wonder: wait, you don't have any kind of API, or at least structured data that the vendor gives you? Nope. And here you are. I am not talking about a niche business. I assume that's a broader problem. Tech could probably have automated all of this 30 years ago. Still, businesses lack "proper" IT processes, because in the end every company is unique and requires particular measures to be "fully" onboarded to IT-based improvements like that. |
| ▲ | delaminator 2 hours ago | parent | prev | next [-] | | You still need to teach a 2020s employee how to use Claude: how to protect yourself from data loss and secret leaks; what it can and can't do; trust issues and hallucinations. You can't just enable Claude for Excel and expect people to become Excel wizards. |
| ▲ | Zardoz84 4 hours ago | parent | prev | next [-] | | And nobody mentions that at "20 bucks per worker" they're selling it at a loss. I'm waiting to see when they set a price that's expected to generate some net income... |
| ▲ | latchkey 8 hours ago | parent | prev [-] | | Like Uber/Airbnb in the early days, this is heavily subsidized. | |
|
|
| ▲ | _aavaa_ 8 hours ago | parent | prev | next [-] |
| For more on this exact topic and an answer to Solow’s Paradox, see the excellent "The Dynamo and the Computer" by Paul David [0]. [0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03... |
| |
| ▲ | gsf_emergency_6 7 hours ago | parent [-] | | A Stanford prof rebuts David's idea[0] that it's difficult to extract productivity from the data: https://www.nber.org/system/files/working_papers/w25148/w251... I don't agree that real GDP measures what he thinks it measures, but he opines: >Data released this week offers a striking corrective to the narrative that AI has yet to have an impact on the US economy as a whole. While initial reports suggested a year of steady labour expansion in the US, the new figures reveal that total payroll growth was revised downward by approximately 403,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth. https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419... [0] On the basis that IT and AI are not general technologies in the mold of the dynamo; keyword: "intangibles", see section 4, p. 21, "A method to measure intangibles". |
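The growth-accounting intuition behind that quote, as a small sketch; only the 3.7 per cent GDP figure comes from the quote, and the employment change is a hypothetical stand-in for the payroll revision:

```python
# To first order, labor productivity growth ~= output growth - labor input
# growth. Only the 3.7% figure is from the FT quote; the employment change
# is hypothetical (~403k fewer jobs against ~160M payrolls, held as a rate).
gdp_growth = 0.037           # Q4 real GDP growth, from the quote
employment_growth = -0.003   # hypothetical: roughly 403k / 160M

productivity_growth = gdp_growth - employment_growth
print(f"implied labor productivity growth: {productivity_growth:.1%}")  # ~4.0%
```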
|
|
| ▲ | unkulunkulu 2 hours ago | parent | prev | next [-] |
Ok, this article inspired some positivity in my view. Here comes, of course, a disclaimer that this is just "wishful thinking", but still. So we are in the process of "adopting a technology". Welcome, keep calm, observe, don't be ashamed to feel emotions like fear, excitement, anger and all the rest. While adopting it, we learn how to use it better and better. At first we try "do all the work for me", then "ok, that was bad, plan what you would do, good, adjust, ok, do it like this", etc. A couple of years into the future this knowledge is simply "passed on". If productivity grew and we "figured out how to get more out of the universe", then no jobs had to be lost, just readapted. And "investors" get happy not by "replacing workers" but by "reaping win-win rewards" from the universe at large. There are dangers, of course, like "maybe this is truly a huge win-win, but some losses can be hidden, like ecology", but "I hope there are people really addressing these problems, and this win-win will help them be more productive as well". |
|
| ▲ | whynotminot 6 hours ago | parent | prev | next [-] |
| It’s also pretty wild to me how people still don’t really even know how to use it. On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code. The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon. You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early. |
| |
| ▲ | overgard 6 hours ago | parent | next [-] | | I don't think "results don't match promises" is the same as "not knowing how to use it". I've been using Claude and OpenAI's latest models for the past two weeks now (moving at about 1000 lines of code a day, which is what I can comfortably review), and they make subtle, hard-to-find mistakes all over the place. Or they misunderstand well-known design patterns, or do something boneheaded. I'm fine with this! But that's because I'm asking for code that I could write myself, and I'm actually reading it. This whole "it can build a whole company for me and I don't even look at it!" is overhype. | |
| ▲ | XenophileJKO an hour ago | parent | next [-] | | If you know good architecture and you are testing as you go, I would say it is probably pretty damn close to being able to build a company without looking at the code. Not without "risk", but definitely doable and plausible. My current project, which I started this weekend, is a Rust client-server game with the client compiled to WebAssembly. I do these projects without reading the code at all as a way to gauge what I can possibly do with AI without reading code, purely operating as a PM with technical intuition and architectural opinions. So far Opus 4.6 has been capable of building it all out. I have to catch issues, and I have asked it for refactoring analysis to see if it could optimize the file structure/components, but I haven't read the code at all. At work I certainly read all the code. But I would recommend people try to build something non-trivial without looking at the code. It does take skill, so maybe start small and build up intuition for how these tools run into issues. I think you'll be surprised how much your technical intuition can scale even when you are not looking at the code. | | |
| ▲ | scoopdewoop 5 hours ago | parent | prev | next [-] | | Prompting LLMs for code simply takes more than a couple of weeks to learn. It takes time to get an intuition for the kinds of problems they've seen in pre-training, what environments they faced in RL, and what kinds of bizarre biases and blind spots they have. Learning to google was hard, learning to use other people's libraries was hard, and this is on par with those skills at least. If there is a well-known design pattern you want, that's a great thing to call out. Knowing what to add to the context takes time and taste. If you are asking for pieces so large that you can't trust them, ask for smaller pieces and their composition. It's a force multiplier, and your taste for abstractions as a programmer is one of the factors. In early usenet/forum days, the XY problem described users asking for implementation details of their X solution to Y problem, rather than asking how to solve Y. In LLM prompting, people fall into the opposite: they have an X implementation they want to see, and rather than ask for it, they describe the Y problem and expect the LLM to arrive at the same X solution. Just ask for the implementation you want. Asking bots to ask bots seems to be another skill as well. |
| ▲ | vidarh 3 hours ago | parent | prev [-] | | Do you use an agent harness to have it review code for you before you do? If not, you don't know how to use it efficiently. A large part of using AI efficiently is to significantly lower that review burden by having it do far more of the verification and cleanup itself before you even look at it. |
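A minimal sketch of what such a review-first harness can look like. It assumes a headless agent CLI invocable as `claude -p "<prompt>"` (check your tool's actual flags); the loop structure, not the exact command, is the point:

```python
# Review-before-the-human harness: let the agent critique and fix its own
# changes until the review comes back clean, and only then involve a person.
import subprocess

REVIEW_PROMPT = (
    "Review the uncommitted changes in this repo. List concrete bugs, "
    "missing tests, and style violations. If everything passes, reply PASS."
)

def run_agent(prompt: str) -> str:
    # Assumed headless invocation; substitute your agent CLI here.
    result = subprocess.run(
        ["claude", "-p", prompt], capture_output=True, text=True, check=True
    )
    return result.stdout

def review_loop(max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        report = run_agent(REVIEW_PROMPT)
        if "PASS" in report:
            return True  # clean: now worth a human's attention
        # Feed the agent its own findings and let it fix them.
        run_agent(f"Fix every issue in this review, then re-run the tests:\n{report}")
    return False  # still failing after max_rounds: escalate to a human early

if __name__ == "__main__":
    print("ready for human review" if review_loop() else "needs human attention")
```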
| |
| ▲ | politelemon 6 hours ago | parent | prev | next [-] | | You are assuming that we all work on the same tasks and should have exactly the same experience with it, which is of course far from the truth. It's probably best to start from that base assumption and work out the implications from there. As for the last example: for all the money being spent in this area, if someone is expected to perform a workflow based on the kind of question they're supposed to ask, that's a failure in the packaging and discoverability of the product; the leaky abstraction only helps those of us who know why it's there. |
| ▲ | harrall 6 hours ago | parent | prev | next [-] | | I’ve been helping normal people at work use AI, and there are two groups that are really struggling: 1. People who only think of using AI in very specific scenarios. They don’t know when to use it outside of the obvious “to write code” situations, they don’t use AI effectively, and they get deflated when it outputs the occasional garbage. They think “isn’t AI supposed to be good at writing code?” 2. People who let AI do all the thinking. Sometimes they’ll use AI to do everything, and you have to tell them to throw it all away because it makes no sense. These people also tend to dump analyses straight from AI into Slack because they lack the tools to verify whether a given analysis is correct. To be honest, I help them by teaching fairly rigid workflows like “you can use AI if you are in this specific situation.” I think most people will only pick up tools effectively if there is a clear template. It’s basically on-the-job training. |
| ▲ | mrtksn 6 hours ago | parent | prev | next [-] | | In a WhatsApp group full of doctors, managers, journalists and engineers (including software) aged 30-60, I asked if anyone had heard of OpenClaw; only 3 people had heard of it from influencers, and none had used it. But from my social feed the impression was that it is taking over the world :) I asked because I have been building something similar for some time, and I thought it was over, that they had been faster than me; but as it appears, there's no real adoption yet. Maybe there will be some once they release it as part of ChatGPT, but even then it looks too early, as few people are actually using the more advanced tools. It’s definitely at a very early stage. It appears that so far the mainstream success of AI is limited to slop generation, and even that is actually a small number of people generating huge amounts of slop. | |
| ▲ | wiseowise 5 hours ago | parent | next [-] | | > I asked if anyone heard of twitter vaporware and only 3 people heard of it from influencers, none used it. Shocking results, I say! | | |
| ▲ | KellyCriterion 5 hours ago | parent [-] | | No, these people ("managers, engineers", etc.) just don't work in tech & IT but in other fields, and they don't read tech news in your country, etc. Most people simply aren't as deep into this as most people on HN. | |
| ▲ | stackbutterflow an hour ago | parent | next [-] | | I spend between 1 and 2 hours a day on HN and I barely know what OpenClaw is. I've seen it mentioned once or twice and checked their website, but that's all. If one had let AI FOMO drive them since the release of ChatGPT, they'd be glued to their screen 24/7. |
| ▲ | wiseowise 3 hours ago | parent | prev [-] | | > “Tech news” A guy attached Claude to his socials, groundbreaking tech. | | |
| ▲ | KellyCriterion 2 hours ago | parent [-] | | I once worked for a consulting & development company that was trying to enter sector ABC by staffing up a team of people who, I was told, had an interest in sector ABC and wanted to do projects there. While they were deep into software development in general, none of them read any of the essential daily industry news (not even the part related to doing software development in sector ABC) :-) So no, even people somehow attached to a topic are not necessarily more deeply involved. |
|
|
| |
| ▲ | alephnerd 6 hours ago | parent | prev [-] | | > I asked because I have been building something similar for some time and I thought it was over, they were faster than me If you have been working on a use case similar to OpenClaw for some time now, I'd actually say you are in a great position to start raising now. Being first to market is not a significant moat in most cases. Few people want to invest in the first company in a category - it's too risky. If there are a couple of other early players, then the risk profile has been reduced. That said, you NEED to concentrate on GTM - technology is commodified, distribution is not. > It appears that so far the mainstream success of AI is limited to slop generation, and even that is actually a small number of people generating huge amounts of slop The growth of AI slop has been exponential, but the application of agents to domain-specific use cases has been decently successful. The biggest reason you don't hear about it on HN is that domain-specific applications are not well known on HN, and most enterprises are not publicizing the fact that they are using these tools internally. Furthermore, almost anyone who is shipping something with actual enterprise usage is under fairly onerous NDAs right now, and every company has someone monitoring HN like a hawk. | |
| ▲ | mrtksn 5 hours ago | parent | next [-] | | Do you think it is a good idea to release it first on iOS and announce on HN and Product Hunt? How would you do it? In my app, the tech is based on running agent-generated code on JavaScriptCore to do things like OpenClaw does; I'm wrapping the JS engine with the missing functionality like networking, file access and database access, so I believe I will not have a problem releasing it on the Apple App Store, as I use their native stack. Then, since this stack is also open source, I'm making a version that will run on Linux, the idea being that users develop their solution on their device (iOS & Mac currently), see it working, and then deploy it to a server with a tap of a button, so it keeps running. | |
| ▲ | alephnerd 5 hours ago | parent [-] | | Who's your persona? How are you pricing and packaging? Who is your buyer? Are you D2C? Consumer? Replacing EAs? Replacing project managers? ... You need to answer these questions in order to decide whether a Show HN makes sense versus a much more targeted launch. If you do not know how to answer these questions, you need to find a cofounder asap. Technology is commodified; GTM, sales, and packaging are what turn technology into products. Building and selling and fundraising as one person is a one-way ticket to burnout, which only makes you and your product less attractive. I also highly recommend chatting with your network to understand common types of problems. Once you've identified a couple of classes of problems and personas for whom your story resonates, then you can decide what approach to take. Best of luck! | |
| ▲ | mrtksn 5 hours ago | parent [-] | | The persona is someone who knows what they are doing but needs someone to actually automate their work routine. I.e. maybe it's a crypto trader who makes decisions on signal interpretation, so they can create a trading bot that executes their method. Maybe it's a compliance person who needs to automate some routine, like checking details further when certain conditions arise. Or maybe a social media manager who needs to moderate their channels. Maybe someone who needs a tool for monitoring HN in that specific way? Thanks for the advice! I'm at a stage where I want to have such a tool and see who else wants it. I'm not sure yet about its viability as a business and what the exact market is. Maybe I will find out by putting it into the wild, and that's why I'm considering releasing it as a mobile app first. | |
| ▲ | wongarsu an hour ago | parent [-] | | That persona still sounds too generic, too unfocused. But even with that persona, it should already answer your question whether posting on HN and producthunt should be a core part of your strategy. Not a lot of social media managers or compliance people around here. And even for crypto traders there are better places to pitch products to them |
|
|
| |
| ▲ | walterbell 5 hours ago | parent | prev [-] | | > every company has someone monitoring HN like a hawk. Monitoring specific user accounts or keywords? Is this typically done by a social media reputation management service? |
|
| |
| ▲ | bigbuppo 5 hours ago | parent | prev | next [-] | | And it will get worse once the UX people get ahold of it. | | |
| ▲ | scrubs 5 hours ago | parent [-] | | You got that right... imagine AI making more keyboard shortcuts, "helping" Wayland displace X even further, new window transitions, overhauling htmx... it'll be hell+ on earth. | |
| ▲ | alternatex 4 hours ago | parent [-] | | We can indeed only imagine. For now, AI has been a curse for open source projects. |
|
| |
| ▲ | KellyCriterion 5 hours ago | parent | prev | next [-] | | A neighbour of mine has a PhD and works in research at a hospital. He is super smart. Last time we talked he said: "Yes, yes, I know about ChatGPT, but I do not use it at work or at home." Therefore, most people won't even know about Gemini, Grok or even Claude. |
| ▲ | tstrimple 6 hours ago | parent | prev | next [-] | | > On hacker news, a very tech literate place I think this is the prior you should investigate. That may be what HN used to be. But it's been a long time since it has been an active reality. You can still see actual expert opinions on HN, but they are the minority more and more. | | |
| ▲ | alephnerd 6 hours ago | parent [-] | | I think one longtime HN user (Karrot_Kream I think) pinpointed the change in HN discourse to sometime in mid 2022 to early 2023 when the rate of new users spiked to 40k per month and remained at that elevated rate. From personal experience, I've also noticed that some of the most toxic discourse and responses I've received on this platform are overwhelmingly from post-2022 users. |
| |
| ▲ | slopinthebag 6 hours ago | parent | prev [-] | | > I see people thinking modern AI models can’t generate working code. Really? Can you show any examples of someone claiming AI models cannot generate working code? I haven't seen anyone make that claim in years, even from the most skeptical critics. | | |
| ▲ | autoexec 5 hours ago | parent | next [-] | | I've seen it said plenty of times that the code might work eventually (after several cycles of prompting and testing), but even then the code you get might not be something you'd want to maintain, and it might contain bugs and security issues that don't (at least initially) seem to impact its ability to do whatever it was written to do, but which could cause problems later. | |
| ▲ | zelphirkalt an hour ago | parent | prev | next [-] | | Depends what they mean. Generate working code all the time, or after a few iterations of trying and prompting? It can very easily happen that an LLM generates something that is a straight error, because it hallucinates some keyword argument or something like that which doesn't actually exist. That happened to me only yesterday. So going from that: no, they are still not able to generate working code all the time. Especially when the basis is a shoddily made library that is simply missing something required. |
| ▲ | IshKebab 7 minutes ago | parent | prev | next [-] | | I'll claim it. They can't generate working code for the things I am working on. They seem to be too complex or in languages that are too niche. They can do a tolerable job with super popular/simple things like web dev and Python. It really depends on what you're doing. |
| ▲ | KellyCriterion 5 hours ago | parent | prev | next [-] | | Scroll up a few comments, where someone said Claude generates errors over and over again and that Claude can't work according to coding guidelines, etc. :-)) |
| ▲ | dangus 6 hours ago | parent | prev [-] | | And really the problem isn’t that it can’t make working code; the problem is that it’ll never get the kind of context that is in your brain. I started working today on a project I hadn’t touched in a while but now needed to, as it was involved in an incident where I had to address some shortcomings. I knew the fix I needed to make, but I went about my usual AI-assisted workflow because of course I’m lazy; the last thing I want to do is interrupt my normal work to fix this stupid problem. The AI doesn’t know anything about the full scope of all the things in my head about my company’s environment and the information I need to convey to it. I can give it a lot of instructions, but it’s impossible to write out everything in my head across multiple systems. The AI did write working code, but despite writing the code way faster than me, it made small but critical mistakes that I wouldn’t have made on my first draft. For example, it added a command flag that I knew it didn’t need, and it probably should have known that too. Basically it changed a line of code that it didn’t need to touch. It also didn’t realize that the curled URL was going to redirect, so we needed an -L flag. Maybe it should have known, but my brain knew it already. It also misinterpreted some changes in direction that a human never would have. It confused my local repository for the remote one because I originally thought I was going to set up a mirror, but I changed plans and used a manual package upload to curl from. So it put the remote URL in some places where the local one should have been. Finally, it seems to have created some strange text gore while editing the readme, where it deleted existing content for seemingly no reason other than some kind of readline snafu. So yes, it produced great code very fast, code that would have taken me way longer to write, but I had to go back and spend a similar amount of time fixing so many things that I might as well have just done it manually. But hey, I’m glad my company is paying $XX/month for my lazy workday machine. | |
| ▲ | KellyCriterion 5 hours ago | parent [-] | | >> The AI doesn’t know anything about the full scope of all the things in my head about my company’s environment and the information I need to convey to it. << This is your problem: how should it know if you do not provide it? Use Claude - in the Pro version you can submit files for each project which set the context. These can be files, source code, SQL scripts, screenshots, whatever - the output will then be based on the context given by those files. |
|
|
|
|
| ▲ | TimByte an hour ago | parent | prev | next [-] |
Yet with IT, the bottleneck was largely technical and capital-related, whereas with AI it feels more organizational and cognitive. |
|
| ▲ | masteruvpuppetz 4 hours ago | parent | prev | next [-] |
An old office colleague used to tell us there was a time when he'd print a report prepared with Lotus 1-2-3 (the ancient Excel) and his boss would verify the calculations on a calculator, saying computers are not reliable. :o |
|
| ▲ | matsemann 2 hours ago | parent | prev | next [-] |
Is this like the hotels that first jumped on the wifi bandwagon? They spent lots of money up front for expensive tech. Years later, anyone could buy a cheap router and set it up, so every hotel had wifi. But the original high-end hotels that were first out with wifi and paid a lot for it have the worst, oldest wifi and charge users for it, still trying to recoup the costs. |
|
| ▲ | gsf_emergency_6 8 hours ago | parent | prev | next [-] |
FWIW, Fortune had another article this week saying this J-curve of "General Technology" is showing up in the latest BLS data: https://fortune.com/2026/02/15/ai-productivity-liftoff-doubl... Source of the Stanford-approved opinion:
https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419... https://www.apolloacademy.com/waiting-for-the-ai-j-curve/ |
|
| ▲ | __jf__ 4 hours ago | parent | prev | next [-] |
Paul Strassmann wrote a book in 1990 called "Business Value of Computers" showing that it matters where money on computers is spent: only firms that spent it on their core business processes showed increased revenues, whereas the ones that spent it on peripheral business processes didn't. |
| |
| ▲ | Underqualified an hour ago | parent [-] | | This is my feeling about both IT and AI: they enable companies to do a lot of things which don't really bring value.
One of the biggest use cases for AI in the company I work for now is Power BI report generation. Fine, but a couple of years ago we didn't even have all these graphs and reports. I'm not sure they bring actual value, since I see decisions still being made mostly on intuition. |
|
|
| ▲ | heresie-dabord 7 hours ago | parent | prev | next [-] |
> It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall. The 1990s boom was in large part due to connectivity -- millions[1] of computers joined the Internet. [1] In the 1990s. Today, there are billions of devices connected, most of them Android devices. |
| |
|
| ▲ | overgard 6 hours ago | parent | prev | next [-] |
The coding tools are not hard to pick up. Agent chat and autocomplete in IDEs are braindead simple, and even TUIs like Claude are extremely easy to pick up (I think it took me a day?). And despite what the vibers like to pretend, learning to prompt them isn't that hard either. Or, let me clarify: if you know how to code, and you know how you want something coded, prompting them isn't that hard. I can't imagine it'll take that long for an impact to be seen, if there is a major impact to be seen. I think it's more likely that people "feel" more productive, and/or we're measuring the wrong things (lines of code is an awful way to measure productivity -- especially considering that these agents duplicate code all the time, so bloat is a given unless you actively work to recombine things and create new abstractions). |
| |
| ▲ | kylebyte 6 hours ago | parent [-] | | It reminds me a lot of adderall's effect on people without ADHD. A pretty universal feeling that it's making you smarter, paired with no measurable increase in test scores. | | |
| ▲ | overgard 6 hours ago | parent [-] | | That's a good analogy. I've never done stimulants, but from what I've heard about them they make people very active but that isn't the same as productive. |
|
|
|
| ▲ | ozgrakkurt 8 hours ago | parent | prev | next [-] |
| I don’t think LLMs are similar to computers in terms of productivity boost |
|
| ▲ | Waterluvian 6 hours ago | parent | prev | next [-] |
Wow, I didn't realize that. But I always suspected it. I was bewildered that anyone got any real value out of any of that pre-VisiCalc (or even VisiCalc) computer tech for business. It all looked kinda clumsy. |
| |
| ▲ | KellyCriterion 5 hours ago | parent [-] | | (Pre-)VisiCalc:
You have to understand that the primary users (accountants, etc.) do not care how a thing looks in their working process: if a tool helps them, they will use it even if it's ugly by aesthetic frontend standards :-) (Think of those old black-and-white or green mainframe screens - horrible looking, but they got the job done.) |
|
|
| ▲ | kamaal 9 hours ago | parent | prev | next [-] |
One part of the system moving fast doesn't change the speed of the whole system all that much. The thing to note is that verifying whether something got done is hard, and takes time in the same ballpark as doing the work. If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red traffic lights. |
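One concrete angle on cheap verification is property-based testing, which checks an implementation against invariants instead of hand-written cases. A minimal sketch using the hypothesis library, with an illustrative dedupe function standing in for AI-generated code:

```python
# Property-based testing: state invariants, let the framework hunt for
# counterexamples. (pip install hypothesis)
from hypothesis import given, strategies as st

def dedupe_preserving_order(items: list[int]) -> list[int]:
    # Implementation under test (imagine it was AI-generated).
    seen: set[int] = set()
    return [x for x in items if not (x in seen or seen.add(x))]

@given(st.lists(st.integers()))
def test_dedupe(items: list[int]) -> None:
    out = dedupe_preserving_order(items)
    assert len(out) == len(set(items))                  # no duplicates survive
    assert set(out) == set(items)                       # nothing is lost
    assert out == sorted(set(items), key=items.index)   # first-seen order kept
```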
| |
| ▲ | hawaiianbrah 5 hours ago | parent [-] | | Really? I disagree that verifying is as hard as doing the work yourself. It's like P != NP: checking a solution is generally easier than producing one. |
|
|
| ▲ | joering2 7 hours ago | parent | prev | next [-] |
> And so we should expect AI to look the same Is that a substantiated assumption? I recall learning the history of AI at university in 2001: the initial frameworks were written in the 70's, and the prediction was that we would reach human-like intelligence by 2000. Just because Sama came up with this somewhat-breakthrough AI, it doesn't mean that equal improvement leaps will happen on a monthly or annual basis going forward. We may well not make another huge leap, or reach what some call human-level intelligence, for 10 years or more. |
|
| ▲ | arisAlexis 2 hours ago | parent | prev | next [-] |
| Only it's much more exponential |
|
| ▲ | globular-toast 3 hours ago | parent | prev | next [-] |
If things like computer-aided design and improved supply chain management, for example, make manufactured goods last longer and cause less waste, I would expect IT to make measured productivity go down. I drive a 15-year-old car and use a 12-year-old PC. It's a good thing when productivity goes down or stays the same. |
|
| ▲ | killingtime74 7 hours ago | parent | prev | next [-] |
Productivity may rise with time, and costs may come down. But the money is already spent. |
|
| ▲ | calvinmorrison 8 hours ago | parent | prev | next [-] |
> it's helping lots of people, but it's also costing an extraordinary amount of money Is it fair to say that Wall Street is betting America's collective pensions on AI... |
| |
| ▲ | autoexec 5 hours ago | parent | next [-] | | They're betting a lot more than that, but since all their chips are externalities they don't care. | |
| ▲ | HWR_14 7 hours ago | parent | prev [-] | | Very few people have pensions anymore. People now direct their own retirement funds. | | |
| ▲ | noddingham 7 hours ago | parent [-] | | That's what he was saying. Wall Street (the stock market) is people's "pensions" now, because everyone has a 401k or equivalent, so their retirement is tied to the market. Thus, these companies are betting America's collective retirement on AI... |
|
|
|