| ▲ | resiros 3 days ago |
| Here is the report: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus... The story there is very different from what's in the article. Some data points: - 50% of the budgets (the ones that fail) went to marketing and sales - the authors still see AI offering automation equal to $2.3 trillion in labor value, affecting 39 million positions - the top barriers cited are unwillingness to adopt new tools and lack of executive sponsorship Lots of people here are jumping to the conclusion that "AI does not work." I don't think that's what the report says. |
|
| ▲ | rawgabbit 3 days ago | parent | next [-] |
| This stood out to me in the report: A corporate lawyer at a mid-sized firm exemplified this dynamic. Her organization invested $50,000 in a specialized contract analysis tool, yet she consistently defaulted to ChatGPT for drafting work: "Our purchased AI tool provided rigid summaries with limited customization options. With ChatGPT, I can guide the conversation and iterate until I get exactly what I need. The fundamental quality difference is noticeable, ChatGPT consistently produces better outputs, even though our vendor claims to use the same underlying technology." This pattern suggests that a $20-per-month general-purpose tool often outperforms bespoke enterprise systems costing orders of magnitude more, at least in terms of immediate usability and user satisfaction. This paradox exemplifies why most organizations remain on the wrong side of the GenAI Divide.
|
| |
| ▲ | kami23 3 days ago | parent | next [-] | | That's a similar story for me at $DAYJOB. We have Copilot for our IDEs and it is so much worse than Claude Code or any other CLI integration option, and we're restricted from adopting new features as fast as they are turned on. I try to use it during the day and end up frustrated when the restricted agent mode returns "I can't complete that for you" or something similar for pretty reasonable requests. In contrast, I've been cranking out personal apps with Claude Code and my brain is exploding with ideas for the day job. But this space moves so fast that, at the speed corporations move, they're always on last year's cool tool; that has left me demoing personal work to coworkers and hoping it starts to move the needle on getting better tooling. I understand the governance and privacy concerns at $DAYJOB, and as such every tool needs to be approved by a slow human process. We also have OpenAI access, and I find myself using that for research more than Copilot as well; maybe we just picked the worst tool because of the MS vertical integration... | | |
| ▲ | jamwil 3 days ago | parent [-] | | All I have is copilot as well, but with that I can configure Aider to use the copilot openai endpoint, and through that access most of the good models with a capable CLI tool. It’s a pair-programming experience more than an agentic one but I need to stay close to the code anyway. |
| |
| ▲ | johnnyanmac 3 days ago | parent | prev [-] | | >This pattern suggests that a $20-per-month general-purpose tool often outperforms bespoke enterprise systems costing orders of magnitude more $20/month? Is "mid-sized" different than I imagined, or was this 3-4 years ago? We're already seeing model subscriptions balloon. I wouldn't be surprised if these approach typical enterprise prices in a few more years. |
|
|
| ▲ | johnnyanmac 3 days ago | parent | prev | next [-] |
| > AI would offer automation equaling $2.3 trillion in labor value affecting 39 million positions But: > Current automation potential: 2.27% of U.S. labor value Given that US GDP right now is about $27 trillion, I'm not sure this is really mathing out in my head. We're going to potentially optimize 61 billion dollars of US labor value while displacing some 15% of the American labor force, and get back $2.3 trillion in value? Who's purchasing all this (clearly not the workforce)? Meanwhile, investment in AI as of 2025 is already hitting half of that. Granted, GDP is an odd indicator to measure this situation by, but I'm unsure how else we'd measure "labor value" here. |
| |
| ▲ | therealpygon 3 days ago | parent [-] | | I'm not sure how you got that 2.2% of $18.5 trillion in GDP attributed to labor is $61 billion, so I'd agree that math doesn't seem accurate. Additionally, you seem to have pulled the cherry-picked quote, compared it with the "current" impact, and ignored the immediately following text on latent automation exposure (partially extracted for the quote) that explains how it could have the greater impact behind their $2.3T/39M estimates. Seems odd to find those numbers in the report but not read the rest of the same section. | | |
| ▲ | johnnyanmac 3 days ago | parent [-] | | >I'm not sure how you got that 2.2% of 18.5 trillion in GDP attributed to labor is 61 billion The number I googled for 2024 US GDP was $29.18 trillion, so that's part of it. I'm flexible enough to adjust that if it's wrong. >Additionally, you seemed to have pulled the cherry-picked quote and compared with the "current" impact and ignored the immediately following text on latent automation exposure There's no time scale presented in that section that I can find for the "latent" exposure, so it's not very useful as presented. That's why I compared it to now. Over 5 years, I'm not sure, but it could be realistic. Over 20 years, if US GDP doesn't absolutely tank, it's not necessarily as impressive a number as it sounds. You see my confusion here? >that explains how it could have a greater impact that results in their 2.3t/39m estimate numbers Maybe I need to read more of the article, but I need a lot more numbers to be convinced of a 40x efficiency boost (predicted returns divided by current GDP value times their 2.2% labor value) for anything. Even the 20x number, if I use your GDP figure, is a hefty claim. Or present a better metric than my formula above for interpreting "impact". I'm open to a better model here than my napkin math. | | |
| ▲ | lr1970 3 days ago | parent | next [-] | | I think you made an arithmetic mistake by a factor of 10. 2% of 29 trillion is 580 billion. Your number should be around 610 billion, not 61 billion. | |
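The napkin math in this subthread is easy to check directly. A minimal sketch (the ~$29T GDP, ~$18.5T labor-share, and 2.27% automation-potential figures are taken from the comments above, not independently verified):

```python
# Napkin-math check of the figures discussed in this subthread.
# Assumed inputs (from the thread, not independently verified):
gdp = 29e12            # ~2024 U.S. GDP in dollars
labor_value = 18.5e12  # estimated U.S. labor share of GDP
share = 0.0227         # report's "current automation potential" (2.27%)

current_potential = share * labor_value
print(f"2.27% of $18.5T labor value: ${current_potential / 1e9:.0f}B")  # ~$420B

# Against the whole GDP instead of just the labor share:
print(f"2.27% of $29T GDP: ${share * gdp / 1e9:.0f}B")  # ~$658B

# Either denominator lands in the hundreds of billions, not $61B,
# so the $61B figure appears to drop a factor of ten.
multiple = 2.3e12 / current_potential
print(f"Implied multiple to reach the report's $2.3T: {multiple:.1f}x")  # ~5.5x
```

With the corrected base, the jump from "current potential" to the report's $2.3T estimate is roughly 5x rather than the 40x discussed above.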
| ▲ | therealpygon 3 days ago | parent | prev [-] | | I would consider reading the actual report more closely rather than an article of questionable accuracy. For example: > "For instance, an employee can adjust based on new instructions, previous mistakes, and situational needs. A generative AI model cannot carry that memory across tasks unless retrained." This is factually false; that is exactly what memory, knowledge, and context can do with no retraining. Not having completely solved self-adjustment is not a barrier, merely a hurdle already under active research. Imagine if, like the human brain, an LLM were to apply training cases identified throughout the day while it "slept"; the author seems to think this would be a massive undertaking of "retraining". And sorry, if you've worked with many of the same types of employees I have over the years, you'd know that the suggestion that employees are more easily adaptable, will remember across tasks, and are good at adjusting to situational needs can be laughable, even detrimental, depending on the person. The statement seems to be based more on the complaint of a lawyer with no actual AI technical expertise; hardly the best source for what AI can and cannot do "currently". It's useful to consider that almost all of the subjective opinions expressed in this report come from, effectively, 300 or so (maybe fewer) individuals, and that it isn't all that easy to distinguish the findings that are truly fact-based from the opinion-based ones, especially via the linked post. It is also important to note that this report seems to focus on feedback and data from CEOs who look at P&L, not intrinsic or unquantified value. How do you directly quantify a developer fixing 3 bugs instead of 1 in your internal tool? Unless there are layoffs attributed to this specifically, and not "market changes" or general "reorganizations", how is this quantified?
There are a million things AI might do in the future that may not have a massive, or any, clear return on investment. If I buy a better shovel that saves me an hour digging a trench in my own backyard, how much money did that save me? GDP is $29.2T, of which an additional Google search would find that U.S. labor accounts for an estimated $18.5T. 2.2% of $18.5T, or of $29.2T, is still not $61 billion. In most cases, if the simple part of the math doesn't fit, there are probably bigger logic mistakes at play. Best of luck on your understanding. As I said, I'd suggest starting with direct statements from factual sources and the report rather than those the author (or you) interpreted. |
|
|
|
|
| ▲ | chriskanan 3 days ago | parent | prev | next [-] |
| That's my assessment of the report as well... really, some news truly is "fake": they're pushing a narrative they think will drive clicks and eyeballs, and the media is severely misrepresenting what is in this report. The failure is not AI, but that a lot of existing employees are not adopting the tools, or at least not the tools provided by their company. The "shadow AI economy" the report discusses is a real issue: people are just using their personal LLM subscriptions rather than internal company offerings. My university made an enterprise version of ChatGPT available to all students, faculty, and staff so that it can be used with data that should not go to cloud-based LLMs, but it lacks a lot of features and has many limitations compared to, for example, GPT-5. So adoption and retention for that system are relatively low, almost surely because of those limitations. Most use cases don't necessarily involve data that would be illegal to use with a cloud-based system anyway. |
| |
| ▲ | hakfoo 3 days ago | parent | next [-] | | My team has been chewed out for "just because it didn't work once, you need to keep trying it." That feels, to be blunt, almost religious. Claude didn't bless you because you didn't pray often enough and weren't devout enough. Maybe we need to not just say "people aren't adopting it" but actually listen to why. AI is a new tool with a learning curve. But that means it's a luxury choice-- we can spend our days learning the new tool, trying out toy problems, building a workflow, or we can continue to use existing tools to deliver the work we already promised. It's also a tool with an absolutely abysmal learning model right now. Think of the first time you picked up some heavy-duty commercial software (Visual Studio, Lotus 1-2-3, AutoCAD, whatever). Yes, it's complex. But for those programs, there were reliable resources and clear pathways to learn it. So much of the current AI trend seems to be "just keep rewording the prompt and asking it to think really hard and add more descriptive context, and eventually magic happens." This doesn't provide a clear path to mastery, or even solid feedback so people can correct and improve their process. This isn't programming. It's pleading with a capricious deity. Frustration is understandable. If I have to use AI, I find I prefer the Cursor experience of "smarter autocomplete" than the Claude experience of prompting and negotiation. It doesn't have the "special teams" problem of having to switch to an entirely different skill set and workflow in the middle of the task, and it avoids dumping 2000 line diffs so you aren't railroaded into accepting something that doesn't really match your vision/style/standards. What would I want to see in a prompt-based AI product? You'd have much more documented, formal and deterministic behaviour. Less friendly chat and more explicit debugging of what was generated and why. 
In the end, I guess we'd be reinventing one of those 1990s "Rapid Application Development" environments that largely glues together pre-made components and templates, except now it burns an entire rainforest to build one React SPA. Has anyone thought about putting a chat-box front end around Visual Basic? | |
| ▲ | franktankbank 3 days ago | parent | prev [-] | | Absolutely, it's a failure of the workers. |
|
|
| ▲ | didibus 3 days ago | parent | prev | next [-] |
| > affecting 39 million positions Wow, that is crazy. There are 163 million working Americans, so close to a quarter of the workforce is at risk. |
|
| ▲ | baal80spam 3 days ago | parent | prev [-] |
| > Lots of people here are jumping to conclusions. AI does not work. I don't think that's what the report says. Well... "It is difficult to get a man to understand something when his salary depends upon his not understanding it" |