| ▲ | America's top companies keep talking about AI – but can't explain the upsides (ft.com) |
| 130 points by 1vuio0pswjnm7 a day ago | 102 comments |
| |
|
| ▲ | rebeccaskinner a day ago | parent | next [-] |
Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet. I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck it takes long enough for even a good developer to realize it and re-engage their critical thinking that it can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive for quality, but that’s hard to sell as a benefit, and most people seem to feel like just using the LLM to review things means they aren’t using it enough.
Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did, and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.
My guess is that, like a lot of other technology-driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win. |
| |
| ▲ | ernst_klim a day ago | parent | next [-] | | > the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves Exactly my experience. I feel like LLMs have potential as expert systems/smart web search, but not as a generative tool, neither for code nor for text. You spend more time understanding stuff than writing code, and you need to understand what you commit, with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a net negative impact. | | |
| ▲ | spwa4 11 hours ago | parent [-] | | The reason companies, or at least sales and marketing, are so incredibly keen on AI is that it can raise response rates on spam, and on ads, by "hyper-personalizing" them: actually reading the social media accounts of the people looking at the ads and tailoring the ads directly to that. |
| |
| ▲ | breakpointalpha 16 hours ago | parent | prev | next [-] | | Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted the Jira ticket description that I wrote, along with a few examples, and the script works perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well. It seems that the smaller the task and the more tightly defined the input and output, the better the LLMs are at one-shotting.
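For a sense of the shape of script involved, here is a minimal sketch of the same idea, in Python with boto3 rather than bash; the staleness rule (EBS volumes unattached and older than 30 days) is a hypothetical stand-in for whatever the actual ticket specified, and the delete call stays commented out until reviewed:

    # Minimal sketch, not the actual script: finds EBS volumes that are
    # currently unattached ("available") and were created 30+ days ago.
    # The staleness criteria are hypothetical stand-ins; dry-run by default.
    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)

    def stale_volumes():
        # Paginate so accounts with many volumes are handled correctly.
        paginator = ec2.get_paginator("describe_volumes")
        pages = paginator.paginate(
            Filters=[{"Name": "status", "Values": ["available"]}])
        for page in pages:
            for vol in page["Volumes"]:
                if vol["CreateTime"] < cutoff:
                    yield vol["VolumeId"]

    if __name__ == "__main__":
        for vol_id in stale_volumes():
            print(f"would delete {vol_id}")
            # ec2.delete_volume(VolumeId=vol_id)  # enable after review

| | |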
| ▲ | washadjeffmad 14 hours ago | parent | next [-] | | Same. I interface with a team who refuses to conduct business in anything other than Excel, and because of dated corporate mindshare, their management sees them more as wizards than as the odd ones out. "They're on top of it! They always email me the new file when they make changes and approve my access requests quickly." There are limits to my stubbornness, and my first use of LLMs for coding assistance was to ask for help figuring out how to Excel, after a mere three decades of avoidance. After engaging and learning more about their challenges, it turned out one of their "data feeds" was actually them manually copy/pasting into a web form with a broken batch import that they'd given up on submitting project requests for, which I quietly fixed so they got to retain their turnaround while they planned some other changes. Ultimately nothing grand, but I would never have bothered if I'd had to wade through the usual sort of learning resources available or ask another person. Being able to transfer and translate higher-level literacy, though, is right up my alley. | |
| ▲ | rebeccaskinner 15 hours ago | parent | prev | next [-] | | I’ve had similar experiences where AI saved me a ton of time when I knew what I wanted and understood the language or library well enough to review but poorly enough that I’d have been slow writing it because I’d have spent a lot of time looking things up. I’ve also had experiences where I started out well but the AI got confused, hallucinated, or otherwise got stuck. At least for me those cases have turned pathological because it always _feels_ like just one or two more tweaks to the prompt, a little cleanup, and you’ll be done, but you can end up far down that path before you realize that you need to step back and either write the thing yourself or, at the very least, be methodical enough with the AI that you can get it to help you debug the issue. The latter case happens maybe 20% of the time for me, but the cost is high enough that it erases most of the time savings I’ve seen in the happy path scenario. It’s theoretically easy to avoid by just being more thoughtful and active as a reviewer, but that reduces the efficiency gain in the happy path. More importantly, I think it’s hard to do for the same reason partially self-driving cars are dangerous: humans are bad at paying attention well in “mostly safe and boring, occasionally disastrous” type settings. My guess is that in the end we’ll see fewer of the problematic cases, in part because AI improves, and in part because we’ll develop better intuition for when we’ve stepped onto the unproductive path. I think a lot of it will also be that we adopt ways of working that minimize the pathological “lost all day to weird LLM issues” problems by trying to keep humans in the loop more deeply engaged. That will necessarily also reduce the maximum size of the wins we get, but we’ll come away with a net positive gain in productivity. | |
| ▲ | jdiff 11 hours ago | parent | prev [-] | | That's a dangerous game to play with Bash; I'm not sure there's another language more loaded with footguns. |
| |
| ▲ | DanielHB 19 hours ago | parent | prev | next [-] | | I have found that the limit of LLMs' useful coding ability is basically what can reasonably be done as a single copy-paste. Usually that means individual functions. I basically use them as Google on steroids for obscure topics; for simple stuff I still use normal search engines. |
| ▲ | insane_dreamer 12 hours ago | parent | prev [-] | | I've found it to be a significant productivity boost, but only for a small subset of problems. (Things like bash scripts, which are tedious to write and I'm not that great at bash. Or fixing small bugs in a React app, a framework I'm not well versed in. But even then I have to keep my thinking cap on so it doesn't go off the rails.) It works best when the target is small and easily testable (without the LLM being able to fudge the tests, which it will do). For many other tasks it's like training an intern, which is worth it if the intern is going to grow, take on more responsibility, and learn to do things correctly. But since the LLM doesn't learn from its mistakes, it's not clearly a worthwhile investment. |
|
|
| ▲ | CyberMacGyver a day ago | parent | prev | next [-] |
Our new CTO was remarking that our engineering team's AI spend is too low. I believe we have already committed a lot of money but are only using 5% of the subscription. This is likely why there is so much push from the top. They have already committed the money and now have to justify it. |
| |
| ▲ | hn_throwaway_99 a day ago | parent | next [-] | | > They have already committed the money and now have to justify it.
As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.
First, these AI subscriptions are usually month-to-month, and these days, with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.
Second, the vast majority of companies understand sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.
The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work. | |
| ▲ | empiko a day ago | parent | next [-] | | Agreed. I think that many companies force people to use AI in hopes that somebody will stumble upon a killer use case. They don't want competitors to get there first. | |
| ▲ | lelanthran a day ago | parent | prev | next [-] | | > but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer. This makes no sense for coding subscriptions. Just how far behind can you be in skills by taking a wait-and-see position? After all, it's not like this specific product needs more than a single day for the user to get up to speed. | |
| ▲ | dasil003 16 hours ago | parent [-] | | I disagree; agentic coding is a very different skill set. When you are talking about maintaining massive corporate code bases, it’s not an instant-gratification activity like vibe coding a small prototype; a lot of guardrails and, frankly, a new level of engagement in code review become necessary. Ultimately I think this will change the job enough that many folks won’t make the transition. | |
| ▲ | hn_throwaway_99 5 hours ago | parent [-] | | I totally agree - yes, the current AI tools will definitely change, but the difference in AI-tooling specifics is much smaller than the difference between "no AI assistance at all" and an agentic-AI heavy coding process. And I say this as someone who didn't make the transition after 25 years as a software engineer. While I get a lot of value out of AI, I felt it largely changed my job from "mostly author" to "mostly editor", and I just didn't enjoy it nearly as much, so I got out of software altogether and went to violin making school. |
|
| |
| ▲ | pseudalopex 9 hours ago | parent | prev | next [-] | | It's incomplete, but not universally false. Politics is part of how businesses work. Many companies which adopted AI expected results now. People who promised results now have reputations on the line. Incentives influence beliefs. | |
| ▲ | rsynnott 20 hours ago | parent | prev | next [-] | | > Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer. This doesn't make a huge amount of sense, because the stuff is changing so quickly anyway. It's far from clear that, in the hypothetical future where this stuff is net-useful in five years, experience with _today's_ tools will be of any real use at all. | |
| ▲ | nelox a day ago | parent | prev | next [-] | | > The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer. Yes, this is the correct answer. | |
| ▲ | watwut a day ago | parent | prev | next [-] | | Companies do not necessarily understand sunk cost fallacy. > ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation. | | |
| ▲ | dijit a day ago | parent [-] | | yeah, I’ve been in so many companies where “sweetheart deals” force the use of some really shitty tech. Both when the money has been actually committed and when it’s usage based. I have found that companies are rarely rational and will not “leave money on the table” | | |
| ▲ | rightbyte 19 hours ago | parent [-] | | That is how the sausage is made. Ironically, this is what democratic institutions like county administrations are ridiculed for, simply because they are more transparent than the private sector. |
|
| |
| ▲ | ajcp a day ago | parent | prev | next [-] | | > this is definitely not it. > is probably because I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this. Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that have some number of engineers but do not do engineering. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess, we do shoes. If someone says I can spend less on the thing that isn't my business (engineering) then yes, I want to do that." | |
| ▲ | hn_throwaway_99 a day ago | parent [-] | | >> this is definitely not it. >> is probably because > I don't mean to be contrary, but these statements stand in opposition No, they don't. It's perfectly consistent to say one reason is certainly wrong without saying another much more likely reason is definitely right. | | |
| ▲ | ajcp an hour ago | parent [-] | | You are absolutely correct; that was an error in my logic. Apologies. |
|
| |
| ▲ | throwaway984393 a day ago | parent | prev | next [-] | | [dead] | |
| ▲ | sschnei8 a day ago | parent | prev [-] | | Do you have any data to back up the claim: “vast majority of companies understand suck cost fallacy.” I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy. | |
| ▲ | viccis a day ago | parent [-] | | >I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy. There was no need to post this. |
|
| |
| ▲ | vjvjvjvjghv a day ago | parent | prev [-] | | Wish my company did this. I would love to learn more about AI but the company is too cheap to buy subscriptions | | |
| ▲ | foogazi a day ago | parent [-] | | Can you buy a subscription and see if it benefits you ? | | |
| ▲ | trenchpilgrim a day ago | parent [-] | | At my job this would get you disciplined for leaking proprietary data to an unapproved vendor. We have to buy AI from approved vendors that keep our data partitioned from training data. |
|
|
|
|
| ▲ | discordance a day ago | parent | prev | next [-] |
| This comes to mind: "MIT Media Lab/Project NANDA released a new report that found that 95% of investments in gen AI have produced zero returns" [0] Enterprise is way too cozy with the big cloud providers, who bought into it and sold it on so heavily. 0: https://fortune.com/2025/08/18/mit-report-95-percent-generat... |
| |
| ▲ | matwood a day ago | parent | next [-] | | I wonder if people ever read what they link. > The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The 95% isn't a knock on the AI tools, but that enterprises are bad at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools because they are leading to enterprises trying to integrate faster than they normally would. "AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press. | | |
| ▲ | pseudalopex 11 hours ago | parent [-] | | > The 95% isn't a knock on the AI tools, but that enterprises are bad at integration. You could read this quote this way. But the report knocked the most common tools. The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap, tools that don't learn, integrate poorly, or match workflows. Users prefer ChatGPT for simple tasks, but abandon it for mission-critical work due to its lack of memory. What's missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide.[1] [1] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus... |
| |
| ▲ | bawolff a day ago | parent | prev | next [-] | | If the theory is that 1% will be unicorns that make you a trillionaire, I think investors would be OK with that. The real question is whether those unicorns exist or whether it's all worthless. | |
| ▲ | orionblastar a day ago | parent [-] | | Have to pay the power bill for the data centers for GAI. Might not be profitable. |
| |
| ▲ | thenaturalist a day ago | parent | prev [-] | | Fun fact: the report was/is so controversial that the link to the NANDA paper in Fortune has been put behind a Google Form you now need to complete before you can access it. | |
| ▲ | losteric a day ago | parent | next [-] | | Doubt the form has anything to do with how "controversial" it is. NANDA is using the paper's popularity to collect marketing data. | |
| ▲ | gamblor956 6 hours ago | parent | prev [-] | | Fun fact: it was always behind a form if you wanted to access it through the original/primary link. |
|
|
|
| ▲ | nelox a day ago | parent | prev | next [-] |
The claim that big US companies “cannot explain the upsides” of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoRan in mineral extraction. These are significant operational changes. It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured. The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest. |
| |
| ▲ | rsynnott 20 hours ago | parent | next [-] | | > The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest. There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest... Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries. Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...). | |
| ▲ | julkali a day ago | parent | prev | next [-] | | The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs. Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is provided for LLM efforts. | |
| ▲ | comp_throw7 a day ago | parent [-] | | (You're responding to an LLM-generated comment, btw.) | | |
| ▲ | nelox 21 hours ago | parent | next [-] | | The comment was definitely not LLM-generated. However, I certainly did use search for help in sourcing information for it. Some of those searches offered AI-generated results, which I cross-referenced before using them to write the comment myself.
That in no way is the same as “an LLM-generated comment”. | | |
| ▲ | dnissley 20 hours ago | parent | next [-] | | It's popular now to level these accusations at text that contains em dashes. | |
| ▲ | fzzzy 20 hours ago | parent [-] | | An LLM would “know” not to put spaces around an em dash. An en dash should have spaces. | |
| ▲ | jdiff 10 hours ago | parent [-] | | I've actually seen LLMs put spaces around em dashes more often than not lately. I've made accusations of humanity only to find that the comment I was replying to was wholly generated. And I asked: there was no explicit instruction to misuse the em dashes to enhance apparent humanity. |
|
| |
| ▲ | lossolo 19 hours ago | parent | prev [-] | | The use of “ instead of ", two different types of hyphens/dashes, and specific wording and sentence construction are clear signs that the whole comment was produced by ChatGPT. How much of it was actually yours (people sometimes just want an LLM to rewrite their thoughts) we will never know, but it's the output of an LLM. | |
| ▲ | nelox 18 hours ago | parent [-] | | Well, I use an iPhone, and “ is the default on my keyboard. Tell me, why should I not use a hyphen for hyphenated words? I was schooled in British English, where the spaced en dash - is preferred. Shall I go on? | |
| ▲ | lossolo 10 hours ago | parent [-] | | I'm using ChatGPT daily to correct wording, and I work on LLMs; the construction and wording in your comment are straight from ChatGPT. I looked at your other comments, and a lot of them seem to be LLM output. This one is an obvious example: https://news.ycombinator.com/item?id=44404524 And anyone can go back to the pre-LLM era and see your comments on HN. You need to understand that ChatGPT has a unique style of writing and overuses certain words and sentence constructions in ways that are statistically different from normal human writing. Rewriting things with an LLM is not a crime, so you don’t need to act like it is. |
|
|
| |
| ▲ | fragmede 12 hours ago | parent | prev [-] | | and you're responding to a comment where the LLM has been instructed not to use em dashes. And I'm responding to a comment that was generated by an LLM that was instructed to complain about LLM-generated content with a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment and not to whether or not there was an em dash. Unless you have no substance. |
|
| |
| ▲ | Frieren a day ago | parent | prev [-] | | > Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoRan in mineral extraction. But most of the AI push is for LLMs, and all the companies you talk about seem to be using other types of AI. > Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. Bold claim. Toxic positivity seems to be all too common among AI evangelists. > The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest. If the financial crisis taught me anything, it is that if one company jumps off a bridge, the rest will follow. Assuming that there must be some real value "because capitalism" is missing the main proposition of capitalism: companies will make stupid decisions and pay the price for them. |
|
|
| ▲ | jraby3 a day ago | parent | prev | next [-] |
As a small business owner in a non-tech business (60 employees, $40M revenue), AI is definitely worth $20/month, but not in the way I anticipated. I thought we'd use it to reduce our graphics department, but instead we've begun outsourcing designers to Colombia. What I actually use it for is to save time and legal costs. For example, a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But we can easily ask ChatGPT to summarize legal notices and advise us on what to do next as a creditor. |
| |
| ▲ | flohofwoe a day ago | parent | next [-] | | Which summarizes the one useful property of LLMs: a slightly better search engine which, on top of that, doesn't populate the first 5 result pages with advertisements - yet, anyway ;) | |
| ▲ | jordanb 19 hours ago | parent | next [-] | | No doubt in ten years chatgpt will mostly be telling you things it was paid to say. | |
| ▲ | Gud 20 hours ago | parent | prev [-] | | The saddest part is we used to have highly functional search engines two decades ago, where you would get results from subject matter experts. Today it’s only the same SEO-formatted crap with no answers. I am working on a solution. |
| |
| ▲ | pjc50 21 hours ago | parent | prev [-] | | > Not worth hiring an attorney to walk us through bankruptcy filings The AI doesn't carry professional liability insurance, so this is about as good as asking one of the legal subreddits. It's probably fine in this case since the worst case is not getting the money that you were at risk of not getting anyway. |
|
|
| ▲ | vjvjvjvjghv a day ago | parent | prev | next [-] |
This reminds me of the internet in 2000. Lots of companies were doing .COM stuff, but many didn’t understand what they were doing or why they were doing it. But in the end the internet was a huge game changer. I see the same with AI. There will be a lot of money wasted, but in the end AI will be a huge transformation. |
| |
| ▲ | marklubi 11 hours ago | parent | next [-] | | > This reminds me of the internet in 2000 The thing that changed it was smartphones (~7 years later). Suddenly, the internet was available everywhere and not just a thing for nerds. Not sure that AI is quite there yet; I'm still trying to identify what the catalyst will be that makes it seamless. |
| ▲ | kilroy123 21 hours ago | parent | prev [-] | | I completely agree. I think the financial bubble will also burst soon. Doesn't mean it won't keep on slowly eating the world. |
|
|
| ▲ | mgh2 a day ago | parent | prev | next [-] |
| https://archive.is/133z6 |
| |
|
| ▲ | 01100011 a day ago | parent | prev | next [-] |
| AI isn't about what you are able to do with it. AI is about the fear of what your competitors can do with it. I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete. |
|
| ▲ | sydbarrett74 a day ago | parent | prev | next [-] |
| AI provides cover to lay people off, or else commit constructive dismissal. |
| |
| ▲ | lotsofpulp a day ago | parent | next [-] | | Constructive dismissal and layoffs are mutually exclusive. https://en.wikipedia.org/wiki/Constructive_dismissal >In employment law, constructive dismissal occurs when an employee resigns due to the employer creating a hostile work environment. No employee is resigning when an employer tells the employee they are terminated due to AI replacing them. | | |
| ▲ | heavyset_go a day ago | parent | next [-] | | AI is what you tell the board/investors is the reason for layoffs and attrition. Layoffs and attrition happen for reasons that are not positive; AI provides a positive spin. | |
| ▲ | lmm a day ago | parent | prev | next [-] | | > No employee is resigning when an employer tells the employee they are terminated due to AI replacing them. No, but some are resigning when they're told their bonus is being cut because they didn't use enough AI. | |
| ▲ | TheDong a day ago | parent | prev [-] | | Using AI makes me want to resign from life, it removes all the fun and joy from coding. I absolutely will resign if my job becomes 100% generating and reviewing AI generated slop, having to review my coworker's AI slop has already made my job way less fun. | | |
| ▲ | bluefirebrand 15 hours ago | parent [-] | | Yeah, I'm feeling this. I'm almost 40 and I'm really not interested in continuing the AI slop treadmill that I'm seeing stretching before me I am my own agent in the world. I don't get any satisfaction from using AI to do my work and offloading my responsibility and my skills to a computer. Computers are tools for me to use, not partners or subordinates Strongly thinking about going back to school to retrain into something else aside from software. The only thing stopping me at the moment is that I think AI is making every industry and job similarly stupid and fruitless right now so changing lanes will still land me in the "AI pilot" career path What a shitty time to be alive. I used to love technology. Now I'm coming to loathe it. It is diminishing what it means to be human |
|
| |
| ▲ | zippyman55 a day ago | parent | prev [-] | | Agreed! The people who did not work hard but were kept employed, a la “bullshit work”, are being removed. | |
| ▲ | bravetraveler a day ago | parent [-] | | Eh, I have plenty of "bullshit work". Only that, actually, for the foreseeable future. Building clusters six servers at a time... that last on the order of weeks, appeasing "stakeholders" that are closer to steaks. A whole lot of empty movement and minds behind these 'investments'. FTE that amounts to contracted, disposable labor to support The Hype. | |
| ▲ | zippyman55 16 hours ago | parent [-] | | But that is still highly skilled work, and I would not call it BS work.
What I was referring to was the class of work that someone off the street can be trained to do in thirty minutes. Building a cluster is more challenging.
Not sure why I picked up three downvotes. Must have touched a nerve! | |
| ▲ | bravetraveler 15 hours ago | parent [-] | | It's pretty BS, trust me! The work is automated to the point that it takes more time to hand off/hand in. Very performative. We could get this to the point of taking people off the street/putting them to task... but instead, we've collectively found it more valuable to push the spreadsheet along a few cells at a time, together. Perhaps it's mistaken to localize the BS; it's shared. Soft and tends to spread. |
|
|
|
|
|
| ▲ | apexalpha a day ago | parent | prev | next [-] |
Computers being able to digest vision, audio, and other input into text and back has tremendous value. You can’t convince me otherwise; we just haven’t found a ‘killer app’ yet. |
| |
| ▲ | ozgung a day ago | parent [-] | | I believe this is the correct way of seeing things. We may not need a killer app though, since AI is not a platform but a core technology. It’s more about the evolution of IT infrastructure and SW systems. Non-tech companies/people don’t need to do anything really. AI will just come to them. |
|
|
| ▲ | kapone a day ago | parent | prev | next [-] |
Well, as anecdotal data, have you folks noticed ads lately pushing Gemini/Claude/xx both in legacy media and online? If AI (and these products) is sooo great, why do these companies have to advertise to sell their wares? And Google and Microsoft are hellbent on pushing AI into everything, even if users don't want it. Nope, we're gonna throw the kitchen sink at you and see if it sticks. In the non-tech world, nobody gives a shit about AI. People and businesses go about their daily lives without thinking about things like "Hmmm...maybe I could have prompted that LLM a different way..." |
|
| ▲ | nasmorn 19 hours ago | parent | prev | next [-] |
The problem with AI is how confidently wrong it is. In Lisbon I uploaded a picture of myself on some church steps and asked ChatGPT where it was.
It came up with a place I was sure I’d never been. Then I asked if it could be part of some other place, and it said sure, it’s inside the main church. The pic was clearly outside. Next it gave me a random famous staircase that is so clearly different a human could never be fooled.
Each of these lies was extremely elaborate, citing sources and describing all the things that matched.
The only comparable experience was with a taxi in Delhi some 20 years ago, where the driver pretended to know where he was going, and when I questioned him further he said the 40-story hotel I was looking for had been demolished 5 years after opening. At least he had a monetary interest in lying to me so that I would enter his cab. |
|
| ▲ | firefoxd a day ago | parent | prev | next [-] |
| For most companies AI is a subscription service you sign up for. Because of great marketing campaigns, it has become a necessary tax. If you don't pay, the penalty is you lose value because it doesn't look like you are embracing the future. If you pay, well it's just a tax that you hope your employees will somehow benefit from. |
|
| ▲ | ksec a day ago | parent | prev | next [-] |
If we ignore the coding and tech industry for a minute: other companies keep demanding new reports on certain things, and AI is doing that. Is it productive? Probably not. But do execs love it? Yes. In non-startup, bureaucratic companies, these reports are there as cover-ups, basically to cover everyone's ass so that no one is doing anything wrong because the report said so. |
|
| ▲ | 1vuio0pswjnm7 13 hours ago | parent | prev | next [-] |
| Whether it works or it doesn't, keep talking about "AI", keep speculating. Maintain the hype |
|
| ▲ | wg0 a day ago | parent | prev | next [-] |
Sounds like blockchain all over again. Reminds me of an essay from two product managers at AWS who talked to clients all over the US and couldn't get any business to clearly articulate why they needed blockchain. Note: AWS has a hosted blockchain that you can use. [1] PS: If anyone has read that essay, please do share the link. I can't really locate it, but it's a wonderful read. [1]. https://aws.amazon.com/managed-blockchain/ |
| |
|
| ▲ | jandrewrogers a day ago | parent | prev | next [-] |
| For the type of work I typically do, AI is hopelessly terrible. Not too surprising because there is zero training data. |
| |
| ▲ | paulddraper a day ago | parent [-] | | So how do you learn? | | |
| ▲ | oblio a day ago | parent | next [-] | | Trial and error? | |
| ▲ | gambiting a day ago | parent | prev [-] | | I mean I'm in a similar situation in that I'm a games developer and no AI system has been trained on the details of PS5/Xbox/Switch development since those are heavily NDA'd and not available to the public. So I learn by reading docs which are available to me as a registered developer, but AI doesn't have that ability and it hasn't been trained on this. |
|
|
|
| ▲ | andyst a day ago | parent | prev | next [-] |
The AI umbrella has been helpful at my BigCorp for justifying more machine learning work, and discrete optimisation and scheduling problems. Agentic AI, which is a huge buzz in enterprise, feels more like workflow and RPA (again), with people misunderstanding that getting the happy flow working is only 20% of the job. |
|
| ▲ | outlore a day ago | parent | prev | next [-] |
One of the great benefits of AI so far has been the push for more plain-text documentation and opening up API access via MCP. Let's enjoy it while it lasts, until we are forced back into walled gardens and microtransactions. |
|
| ▲ | gethly a day ago | parent | prev | next [-] |
Because AI is a financial bubble, and it is the only thing holding up the entire US stock market. But the day of reckoning is near. |
| |
| ▲ | joebob42 a day ago | parent [-] | | Are you using it? I genuinely don't understand how people who are experimenting with the tool can feel this way | | |
| ▲ | stevedonovan a day ago | parent | next [-] | | It's not inconsistent to say there's a financial bubble and also genuinely think it's a new era for software development. There aren't enough programmers to justify the valuations and capex | |
| ▲ | Ianjit a day ago | parent | prev | next [-] | | Dot-com was a financial bubble, but the internet was still very useful. Financial markets can become (and often are) dislocated from reality. | |
| ▲ | gethly 21 hours ago | parent | prev [-] | | I sometimes use Grok, but not much. Your confusion is strange. I never said the tech is a bubble (it can be used today, although in a very limited manner compared to how it is being sold to the public), just the financial aspect of it. If you were more educated in investing, economics, or geopolitics, you'd understand what is going on. I am not being hyperbolic here. Even Altman admitted AI is a bubble. It's really no secret to anyone. But bubbles will be ridden no matter what, all the way up, until they pop. So knowing it is a bubble does not change much. We just know what to expect once it pops. tl;dr I was merely answering the question the article poses. |
|
|
|
| ▲ | fifteen1506 a day ago | parent | prev | next [-] |
Big benefit from coding agents: for things to work, there had better be documentation.
Humans usually aren't given that either, so anything which forces documentation is good. |
|
| ▲ | cjs_ac a day ago | parent | prev | next [-] |
Large language models are a deeply impressive technology, but they're not artificial general intelligences, because you need to supervise them. As with everything else that has been called 'artificial intelligence' since the 1950s, I think we'll find some niches that they're good for, and that'll be the end of the hype bubble. The hype does serve a purpose, though: it motivates people to try to find more possible uses for LLMs. However, as with all experiments, we should expect most of these attempts to fail. |
|
| ▲ | groby_b a day ago | parent | prev [-] |
Simple fact: AI is extremely powerful in the hands of experts who invested time in deeply understanding it, and in understanding how to actually use it well, and who are then willing to commit more time to building an actually sustainable solution. Alas, many members of the C-suite do not exactly fit that description. They have just typed in a prompt or three, marveled that a computer can reply, and fantasized that it's basically a human replacement. There are going to be a lot of (figurative, incorporated) dead bodies on the floor. But there will also be a few winners who actually understood what they were doing, and the wins will be massive. Same as it was post dot-com. |
| |
| ▲ | SchemaLoad a day ago | parent | next [-] | | Something I've noticed is LLMs seem to be able to answer questions on everything, in quite a lot of detail. But I can't seem to get them to actually do anything useful; you basically have to hand-hold them the entire way, to the point that they don't really add value. I'm sure there is plenty of research into this, but there does seem to be a big difference between being able to answer questions and actual intelligence. For example, I have some product ideas in my head for things to 3D print, but I don't know enough about design to come up with the exact mechanisms and hinges for them. I've tried the chatbots, but none of them can really tell me anything useful. Yet once I already know the answer, they can list all kinds of details and know all about the specific mechanisms, but they are completely unable to suggest them when I don't mention them by name in the prompt. |
| ▲ | stretchwithme a day ago | parent | prev [-] | | AI is useful to people who read and understand the answers and who would have eventually come up with a similar result on their own. They have judgement. They can improve what was generated. They can fix a result when it falls short of the objective. And they know when to give up on trying to get AI to understand: when rephrasing won't improve next-word prediction, which happens when the situation is complex. | |
| ▲ | bigstrat2003 a day ago | parent | next [-] | | > AI is useful to people who read and understand the answers and who would have eventually come up with a similar result on their own. I am such a one, and AI isn't useful to me. The answers it gives me are routinely so bad, I can just answer my own questions with a search engine or product documentation faster than I can get the AI to give me something. Often enough I can never get the AI to give me something useful. The current products are shockingly bad relative to the level of hype being thrown about. | | |
| ▲ | bluefirebrand 15 hours ago | parent [-] | | > Often enough I can never get the AI to give me something useful. The current products are shockingly bad relative to the level of hype being thrown about. Yeah, I agree. This has been a big source of imposter syndrome for me lately, since all of this AI coding stuff has skyrocketed People making wild claims about building these incredible things overnight, but meanwhile I can't get anything useful out of them at all Something isn't adding up. Either I'm not a very good programmer, or others are lying about how much the AI is doing for them And I'm pretty sure I'm a pretty good programmer |
| |
| ▲ | pempem a day ago | parent | prev [-] | | It also happens when you ask it to do simple things like create comments. At least 400 words, and still it will regurgitate/synthesize, often with a "!", the content you're asking it to comment on. | |
|
|