proofofcontempt 2 hours ago

What is described here closely resembles my experience too.

My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology he sounded more competent to upper management than the other senior managers who didn't. When called out, he would resort to personal attacks.

After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.

The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and is making up for it by freezing hiring.

switchbak 2 hours ago | parent | next [-]

I think for a lot of companies, AI is a destabilizing force that their managerial structure is unable to compensate for.

When you change the economics to such a degree, you're basically removing a dam - resulting in far more stress on the rest of the system. If the leaders of the org don't see the potential downsides and risks of that, they're in for a world of hurt.

I think we're going to see a real surge of companies just like this - crash and burn even though this tech was sold as being a universal improvement. The ones that survive will spread their knowledge about how to tame this wild horse, and ideally we'll learn a thing or two in the future.

But the wave of naivety has surprised me, and I think there's an endless onrush of people who are overly excited about their new ability to vibe-code things into existence. I think we've got our own endless September event going on for the foreseeable future.

funimpoded an hour ago | parent | next [-]

I increasingly see “AI” as a sort of virus tuned to target management, specifically. Its output is catnip to them, and it’s going to be unavoidable for those who want to look good to superiors and peers (i.e. the #1 priority for managers) even as it adds no actual value whatsoever to what they do. People under them, too, will have to start burning tokens on bullshit to satisfactorily perform competence and “doing work”. Meanwhile, none of this is actually productive. It’s goddamn peacock feathers.

It’s like some kind of management parasite. I’m not even sure at this point that it’s going to lead to an overall productivity increase whatsoever for most sectors, because of this added drag on everything.

LinuxAmbulance 25 minutes ago | parent | next [-]

AI has made my work about 5-8x quicker, just because I'm able to have it cover a lot of the grunt work (update 42 if statements in 32 different files) that took time, but no particular skill.
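The kind of mechanical bulk edit described here (one pattern change repeated across dozens of files) is the sort of thing that can also be sketched as a short script; the `is_enabled` → `feature_enabled` rename and the `.py` file layout below are made-up stand-ins for illustration, not the actual change:

```python
import re
from pathlib import Path

# Hypothetical rename: the helper names and file layout are assumptions.
OLD = re.compile(r"\bif is_enabled\(")
NEW = "if feature_enabled("

def bulk_update(root: str) -> int:
    """Rewrite matching if-statements in every .py file under root.

    Returns the number of files that were changed.
    """
    changed = 0
    for path in Path(root).rglob("*.py"):
        text = path.read_text()
        updated = OLD.sub(NEW, text)
        if updated != text:
            path.write_text(updated)
            changed += 1
    return changed
```

Scriptable cases like this are exactly the no-particular-skill grunt work being described; the AI version just skips writing the script.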

I think the use cases where AI makes an economic improvement to the status quo for a business are rare, but they do exist, and they can be a significant improvement.

It's like the early days of the dotcom boom and bust - people thought the internet was good for every use case under the sun, including shipping people a single candy bar at a loss. After the dotcom bust, a lot of that went by the wayside, but there was a tremendous economic advantage to the businesses that were more useful when available on the internet.

pmg101 an hour ago | parent | prev | next [-]

I agree with everything you've said, but don't you think quite a lot of things have also been like this before, just to a lesser degree?

I've often had the sense that most of what is done inside companies is a kind of performance of work rather than work itself. Mostly it's a big status game between various factions, with all the actual value provided by a few engineers here and there who are able to shut out the noise and build things.

eproxus 39 minutes ago | parent [-]

Agreed, things have probably always been like that. I often see AI as a catalyst that accelerates what already is.

In a good culture with high competence and trust, this can yield increased output (to some degree, at least); in a bad culture, it will accelerate the dominant traits instead.

tanvach 29 minutes ago | parent | prev [-]

This is very apt.

bonesss an hour ago | parent | prev | next [-]

I’m an LLM enjoyer who also thinks that ‘er ‘jerbs are safe and, taken to their logical conclusion, most LLM-stroking online around coding reduces to an argument that we should be speaking Haskell to LLMs and also in specs and documentation (just kidding, OCaml is prettier). But also, I do a little business.

You’ve hit the real issue: IT management is D-tier and lacks self-awareness. “Agile” is effed up as a rule, while also being the simplest business process ever.

That juniors and fakers are whole hog on LLMs is understandable to me. Hype, fashion, and BS are always potent. The part I still cannot understand, as an Executive in spirit: when there is a production issue, and one of these vibes monkeys you are paying has to fix it, how could you watch them copy and paste logs into a service you’re paying top dollar for, over and over, with no idea of what they’re doing, and not be on your way to jail for highly defensible manslaughter?

We don’t pay mechanics to Google “how to fix car”.

20after4 12 minutes ago | parent | next [-]

> We don’t pay mechanics to Google “how to fix car”.

No, instead of Google they just look it up on AllData.

tyyyy3 30 minutes ago | parent | prev [-]

The more difficult it is to trace one’s labour to output, the more theatrics you can expect ;)

atomicnumber3 an hour ago | parent | prev | next [-]

Honestly, the most impactful thing I've seen AI do for any workplace is serve as the ultimate excuse for whatever pet project someone has wanted to do that can't stand on its own merits and really just needs a solid excuse.

Rewrite that old crunchy system that has had 0 incidents in the last year and is also largely "done" (not a lot of new requirements coming in, pretty settled code/architecture)? It's actually one of our most stable systems. But someone who doesn't even write code here thinks the code is yucky! But that doesn't convince the engineers who are on-call for it to replace it for almost no reason. Well guess what. We can do it now, _because AI!!!_ (cue exactly what you think happens next happening next)

Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.

Need to convince your workers to go faster, but EMs tell you you can't just crack the whip? AI mandates / token spend mandates!

Didn't like code reviews and people nitpicking your designs? Sorry, code reviews are canceled, because of AI.

Don't like meetings or working in a team? Well now everyone is a team of 1, because of AI. Better set up some "teams" full of teams of 1, call them "AI-first" teams, and wait what do you mean they're on vacation and the service is down?

Etc. And they don't even care that these things result in the exact negative outcomes that are why you didn't do them before you had the excuse. You're happy that YOUR thing finally got done despite all the whiners and detractors. And of course, it turns out that businesses can withstand an absurd amount of dysfunction without really feeling it. So it just happens. Maybe some people leave. You hire people who just left their last place for doing the thing you just did and now maybe they spend a bit of time here. And the game of musical chairs, petty monarchies, and degenerate capitalism continues a bit longer.

Big props to the people who managed to invent and sell an excuse machine though. Turns out that's what everyone actually wanted.

LinuxAmbulance 23 minutes ago | parent [-]

> Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.

I think we're seeing a ton of that right now, and it doesn't seem to be slowing down any time soon.

vkou an hour ago | parent | prev [-]

> I think for a lot of companies, AI is a destabilizing force that their managerial structure is unable to compensate for.

Absolutely. Giving a traditional company AI is like giving an unlimited supply of crystal-blue methamphetamine to a deadbeat pill addict.

It enables and supercharges all their worst impulses. Making a broken system more 'productive' doesn't do shit to make the users better off.

The work output everyone produces doubles, but the ratio of productive to net-negative work plummets.

2ndorderthought an hour ago | parent | prev | next [-]

I saw something really similar happen at my last few jobs. Two jobs ago, vibe coding wasn't even viable yet, but some people went so hard on bloating everything with LLMs that it became hard to get a yes-or-no answer to anything. A one-line Slack question that should take 20 seconds would get a response that read like two pages of wishy-washy blog post with no answer in it. Follow-ups just wasted more hours.

At my last job, we watched a PM slowly become a vibe manager of vibe coders. He started inserting himself into technical discussions and using AI to dictate our direction at every step. We would reply, but it got so laborious fighting a human relaying AI output about topics they didn't understand that people left. We weren't allowed to push back anymore either, or our jobs would get threatened because of AI. Then they started mandating that everyone vibe code, and the amount of vibe coding was being monitored. The PM got so disorganized being a PM and an engineer and an architect (their choice; no one wanted this) that they would make multiple tickets for the same task with wildly different requirements. One team member would then vibe code it one way and another a different way.

It was so hard to watch a profitable team of 20 people bringing in almost $100 million of profit a year descend into non-utility and the most pointless work. I then left. I'm trying my best not to be jaded by all of these changes to the software industry, but it's a real struggle.

krptos 2 hours ago | parent | prev | next [-]

I've personally witnessed this:

1. My own manager now gives "expert advice and suggestions" using Claude, based on his/her incomplete understanding of the domain.

2. Multiple non-technical people within the company are developing internal software tools to be deployed org-wide, hoping such demos will get them the recognition and incentives they feel they deserve. Management, as expected, is impressed and approving such POCs.

3. Hyperactive colleagues showcasing expert-looking demos that leadership buys, all the while having zero understanding of what's happening underneath.

I didn't know how to articulate this problem well, but this article does a great job!

tyyyy3 33 minutes ago | parent | prev | next [-]

Exactly what I expected to read after reading the first part of your post lol.

I’m starting to realise that many people, and management themselves, don’t really understand why the firm exists or what it does. Funny to watch, tbh.

a34729t 2 hours ago | parent | prev | next [-]

We don't need AI for not producing anything of value in a large company, though it certainly helps us produce even less!

ryandrake 2 hours ago | parent | prev | next [-]

I'm sure they're even more all-in on AI every month. "We will surely succeed if only we AI even harder!" This is how self-reinforcing delusions work. "AI will close the gap" is the fixed belief, and any evidence that comes in is interpreted such that it strengthens that belief.

proofofcontempt 2 hours ago | parent [-]

Pretty much this. It's like a cult mentality. Those who critique the approach or push back get sidelined. There are demos every week of what are essentially Claude loops and MCP integrations, and those of us not reaffirming the ideas stopped getting invited.

Heard some wild statements in the past few months. A couple that come to mind:

- "we don't need to review the output closely, it's designed to correct itself" - "it comes up with the requirements, writes the tickets, and prioritises what to work on. We only need to give it a two or three line prompt"

The promise of this agentic workflow is always only a few weeks away. It's not been used to build anything that has made it to production yet.

ryandrake an hour ago | parent [-]

> The promise of this agentic workflow is always only a few weeks away. It's not been used to build anything that has made it to production yet.

"We just need a swarm of many agents, all independently operating open-loop, creating and resolving tickets continuously. We will surely ship to production soon after implementing that!"

Traubenfuchs 39 minutes ago | parent | prev | next [-]

My company hired a lead architect and he stayed with us for less than a year. He introduced some overengineered shit we are still recovering from. How those people get to where they are and get hired for that kind of position is beyond me.

gregrata an hour ago | parent | prev | next [-]

"hired an architect 18 months ago who used AI to architect everything"

Huh? 18 months ago? I've been using it that long - it wasn't able to do that back then....

2ndorderthought an hour ago | parent | next [-]

I had a similar situation 2 years ago. Correct, these tools could not do those things back then, but people still used them for it anyway. As well as diagnosing their dogs with cancer and whatever else.

dolebirchwood 37 minutes ago | parent | prev [-]

> it wasn't able to do that back then

It was, if you accept that it did so poorly.

AIorNot 2 hours ago | parent | prev | next [-]

Yes, I get your frustration; the same thing is happening across orgs these days as Claude and co. have become widespread.

Wisdom is a thing, and so is competence. Humans either have them or they don't; machines do not (yet). But the massive capabilities of the tools are also something that can't be ignored.

We can't throw the baby out with the bathwater. It's going to take some cycles of learning the ropes with this technology for humans to understand it better.

I would push back: why couldn't the senior devs communicate these issues to senior management? It sounds like a broken human system, not a broken tool or technology. All AI did was shine a light on the human issues in that org.

saganus 2 hours ago | parent [-]

From past experiences (and I'm sure I'm not alone here), I can almost guarantee that the senior devs did communicate the problems, but they were ignored or brushed aside.

Very seldom does middle/upper management truly listen to engineers, unless there's buy-in from the CTO/VP to champion the ideas and complaints.

hn_acc1 an hour ago | parent | next [-]

Over time, as devs gain experience, they see countless fads come and go. Some worked, some screwed things up, etc., but none were the silver bullet their adherents touted them as. So they learn a default "no" or "slowly" response to "we need to do this <buzzword> ASAP" from management who only see $$$. Meanwhile, AI companies are telling management that devs will resist AI because "it's so good it will let you replace them", so devs saying it's a bad idea only reinforces management's views.

bonesss 4 minutes ago | parent [-]

Yeah, the developers who will argue and gnash teeth for weeks about adopting an ORM in the hope of saving a few hours of work they find boring or obvious are, simultaneously, annoyed and upset at being told to save time and effort with super tools…

Pay no attention to the software output, quality, or competitive displacement of the people selling you tools. LLMs, like cheesy sales strategies, are so lucrative that the only thing you can really do is sell them, first come first served, to other people. Makes so much sense. Why make infinite money when you can sell a course/tool to naive and less fortunate companies? So logical.

proofofcontempt an hour ago | parent | prev [-]

The CTO got fired last month, presumably for poor performance. And the director who has taken his place is now all-in on AI, because he's desperate to turn things around but has no idea how.

2ndorderthought an hour ago | parent | next [-]

He doesn't care. When C-suite people get fired, they get something like half a million in severance and go rinse and repeat somewhere else.

htrp 39 minutes ago | parent | prev [-]

Was the CTO advocating a more measured approach to AI adoption?
