ttul 3 days ago

This is cute. I think within 36 months AI will replace middle management in software companies. This will happen because, ironically, today’s middle managers will switch back to being individual contributors, using AI to contribute PRs once again (who doesn’t prefer this anyway?).

Sufficiently powerful AI can become the middle manager of everyone’s dreams. Wonderfully effective interpersonal skills, no personality defects. Fair and timely feedback.

Try to convince me this isn’t the case.

ttul 2 hours ago | parent | next [-]

Lots of great replies - thank you, everyone.

I think most of these objections are valid against a “ChatGPT-in-a-box is your manager” framing. That’s not what I meant by “AI replaces middle management”.

What I did mean is: within ~36 months, a large chunk of the coordination + information-routing + prioritization plumbing that currently consumes a lot of EM/PM time gets automated, so orgs can run materially flatter.

A few specific answers to the questions:

“Where does the AI get the information?”

Not from vibes. From the same places managers already get it, but with fewer blind spots and better recall: issue trackers, PRs, incident timelines, on-call load, review latency, meeting notes, customer tickets, delivery metrics, lightweight check-ins. The “AI manager” is really a system with tools + permissions + audit logs, not a standalone LLM.
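
To make that concrete, here's a rough sketch of the shape I have in mind, in Python. Every name in it (AuditLog, ManagerAgent, list_stale_prs) is made up for illustration, not a real product:

    import json, time

    class AuditLog:
        """Append-only log of every action the system takes."""
        def __init__(self, path="audit.jsonl"):
            self.path = path

        def record(self, actor, tool, args, result_summary):
            entry = {"ts": time.time(), "actor": actor, "tool": tool,
                     "args": args, "result": result_summary}
            with open(self.path, "a") as f:
                f.write(json.dumps(entry) + "\n")

    class ManagerAgent:
        def __init__(self, tools, permissions, log):
            self.tools = tools              # tool name -> callable
            self.permissions = permissions  # actor -> set of allowed tool names
            self.log = log

        def call(self, actor, tool, **args):
            # Permission check first, audit trail always.
            if tool not in self.permissions.get(actor, set()):
                raise PermissionError(f"{actor} may not call {tool}")
            result = self.tools[tool](**args)
            self.log.record(actor, tool, args, str(result)[:200])
            return result

    log = AuditLog()
    agent = ManagerAgent(
        tools={"list_stale_prs": lambda repo: [f"{repo}#123"]},  # stub tool
        permissions={"eng-manager-bot": {"list_stale_prs"}},
        log=log,
    )
    print(agent.call("eng-manager-bot", "list_stale_prs", repo="acme/api"))

The point is structural: nothing happens without a permission check, and every action leaves a log entry you can audit later.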

“How does it notice burnout / team health?”

Two parts: (1) observable signals (sustained after-hours activity, chronic context switching, on-call spikes, growing review queues, missed 1:1s, reduced throughput variance), and (2) explicit human input (quick pulse check-ins, opt-in journaling, “I’m overloaded” flags). Humans are still in the loop for the “I’m not okay” stuff. The AI just catches it earlier and more consistently than a busy manager with 8 directs and 30 Slack threads.
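
Here's a toy version of that two-part split. The weights, thresholds, and the WeeklySignals fields are pure illustration, not a tuned model:

    from dataclasses import dataclass

    @dataclass
    class WeeklySignals:
        after_hours_commits: int
        context_switches: int      # distinct projects touched this week
        oncall_pages: int
        open_review_queue: int
        overloaded_flag: bool      # explicit, opt-in human input

    def overload_score(s: WeeklySignals) -> float:
        # Observable signals, each capped so no single one dominates.
        score = 0.0
        score += 0.3 * min(s.after_hours_commits / 10, 1.0)
        score += 0.2 * min(s.context_switches / 5, 1.0)
        score += 0.2 * min(s.oncall_pages / 8, 1.0)
        score += 0.1 * min(s.open_review_queue / 15, 1.0)
        if s.overloaded_flag:
            score += 0.5  # the human said so; weigh it heavily
        return min(score, 1.0)

    def needs_human_checkin(s: WeeklySignals) -> bool:
        # The system only escalates to a person; it never "handles" burnout.
        return overload_score(s) > 0.6

    week = WeeklySignals(after_hours_commits=7, context_switches=4,
                         oncall_pages=2, open_review_queue=12,
                         overloaded_flag=False)
    print(overload_score(week), needs_human_checkin(week))

Note the only output is "a human should check in", never an automated intervention.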

“Who sets objectives / what about conflicting goals?”

Exactly: humans. Strategy is still human-owned. But translating “increase reliability without killing roadmap” into day-to-day sequencing, tradeoff visibility, and risk accounting is where software can help a lot. Think: continuous, explainable prioritization that shows its work (“we’re pushing this because it reduces SEV risk by X and unblocks Y; here are the assumptions”).
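
A toy sketch of what "shows its work" could look like; the weights and item fields are assumptions, not a real model:

    def prioritize(items):
        ranked = []
        for it in items:
            score = (2.0 * it["sev_risk_reduction"]
                     + 1.5 * it["unblocks_count"]
                     - 1.0 * it["estimated_weeks"])
            explanation = (f"reduces SEV risk by {it['sev_risk_reduction']:.0%}, "
                           f"unblocks {it['unblocks_count']} item(s), "
                           f"costs ~{it['estimated_weeks']}w")
            ranked.append((score, it["name"], explanation))
        return sorted(ranked, reverse=True)

    work = [
        {"name": "retry-storm fix", "sev_risk_reduction": 0.4,
         "unblocks_count": 2, "estimated_weeks": 1},
        {"name": "new dashboard", "sev_risk_reduction": 0.0,
         "unblocks_count": 1, "estimated_weeks": 3},
    ]
    for score, name, why in prioritize(work):
        print(f"{score:5.1f}  {name}: {why}")

Humans still pick the weights; the system's job is to keep the ranking current and the reasoning inspectable.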

“What about historic experience?”

You don’t “download” a manager’s career. You encode the org’s policies, past decisions, and constraints into an accessible memory: postmortems, decision records, architecture notes, norms. The AI won’t have wisdom-by-osmosis, but it will have perfect retrieval of “what happened last time we tried this” and it won’t forget the quiet lessons buried in docs.
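
The retrieval piece doesn't have to be fancy. A bare-bones sketch, with keyword overlap standing in for real embedding search and invented record fields:

    def retrieve(query, records, k=3):
        """Return the k record titles that best match the query."""
        q = set(query.lower().split())
        scored = []
        for r in records:
            words = set((r["title"] + " " + r["body"]).lower().split())
            overlap = len(q & words)
            if overlap:
                scored.append((overlap, r["title"]))
        return [title for _, title in sorted(scored, reverse=True)[:k]]

    memory = [
        {"title": "2022 postmortem: cache stampede",
         "body": "retry storm took down the api"},
        {"title": "ADR-14: why we picked Postgres",
         "body": "operational familiarity over scale"},
    ]
    print(retrieve("what happened last time we had a retry storm", memory))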

“Will we reinvent office politics / will people game it?”

We already do. The difference is: an AI system can be designed to be harder to game because inputs can be cross-validated (tickets vs PRs vs customer impact vs peer feedback) and the rules can be transparent and audited. Also: if you try to game an AI that logs its reasoning, you leave a paper trail. That alone changes incentives.
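
Roughly this shape (all field names invented): a status claim is only accepted when independent sources agree, and the check itself is logged.

    def cross_validate(claim, tickets, prs, incidents, log):
        evidence = {
            "ticket_closed": claim["ticket"] in tickets["closed"],
            "pr_merged": claim["pr"] in prs["merged"],
            "no_new_incidents": claim["ticket"] not in incidents["reopened"],
        }
        accepted = all(evidence.values())
        # The reasoning trail is the anti-gaming mechanism.
        log.append({"claim": claim, "evidence": evidence, "accepted": accepted})
        return accepted

    log = []
    claim = {"ticket": "T-42", "pr": "PR-7"}
    ok = cross_validate(claim,
                        tickets={"closed": {"T-42"}},
                        prs={"merged": {"PR-7"}},
                        incidents={"reopened": set()},
                        log=log)
    print(ok, log[-1]["evidence"])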

“Relationships and trust can’t be automated.”

Agree. And that’s why I don’t think “management disappears.” I think it unbundles: the human part (trust, coaching, hard conversations, hiring/firing accountability, culture) stays human.

The mechanical part (status synthesis, dependency chasing, agenda generation, follow-up enforcement, draft feedback, metric hygiene, “what should we do next and why”) becomes mostly automated. But does anyone actually love that part? I don't.

So the likely outcome isn’t “everyone reports to an API”. It’s: fewer layers, more player-coaches, and AI doing the boring middle-management work that currently eats the calendar.

In other words: I’m not claiming AI becomes the perfect human manager. I’m claiming it makes the org need less middle management by automating the parts that are fundamentally information processing.

DrScientist 3 days ago | parent | prev | next [-]

> Try to convince me this isn’t the case.

:-)

Where is the AI going to get the information required to do the job?

How is the AI going to notice that Bob looks a bit burnt out, or understand which projects to work on/prioritise?

Who is going to set the AI manager's objectives? Are they simple, or multi-factorial and sometimes conflicting? Does the objective function stay static over time? If not, how is it updated?

How are you going to download all the historic experience of the manager to the AI, or is it just going to learn on the job?

What happens when your manager AI starts talking to another team's manager AI? Will you just re-invent office politics, but in AI form? Will you learn how to game your AI manager, since you understand and potentially control all its inputs?

pingananth 3 days ago | parent [-]

Wow, that's a lot of questions and convoluted context, which surely confirms it's going to take time for AI to get there!

wordpad 3 days ago | parent | prev | next [-]

If we use outsourcing as a proxy for which jobs will move to AI first, management jobs will be the last to be replaced.

Managing is about building relationships to coordinate and prioritize work, and even though LLMs have excellent soft skills, they can't build relationships.

pingananth 3 days ago | parent [-]

Spot on. AI might simulate the message perfectly, but it can't hold the social capital and trust required to actually move a team when things get tough.

bdcp 3 days ago | parent | prev | next [-]

> Try to convince me this isn’t the case.

Have you tried asking an AI to convince you otherwise?

gordonhart 3 days ago | parent | prev | next [-]

> Sufficiently powerful AI can become the middle manager of everyone’s dreams. Wonderfully effective interpersonal skills, no personality defects. Fair and timely feedback.

Linking Marshall Brain's ever-relevant novella "Manna" on this: https://marshallbrain.com/manna1

ttul 2 hours ago | parent [-]

> The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step.

pingananth 3 days ago | parent | prev | next [-]

I actually ran this specific 'Backchannel VP' scenario through raw GPT-4 before building the hard-coded version, and the results were surprisingly 'meh.'

The missing piece wasn't intelligence, but statefulness and emotional memory.

A human manager (or VP) remembers that you embarrassed them in a meeting three weeks ago, and that hidden state dictates their reaction today. LLMs—currently—are too 'forgiving' and rational. They don't hold grudges or play power games naturally.

Until AI can simulate that messy, long-term 'political capital' (or lack thereof), I think we still need humans to navigate other humans. But I agree, for pure PR review and logical feedback, I'd take an AI manager any day!
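
To sketch what I mean (entirely hypothetical, not what I actually built): persist a relationship ledger across sessions and prepend it to the prompt, so past interactions can color today's response.

    import json

    class RelationshipMemory:
        """Hidden state that survives across sessions."""
        def __init__(self, path="rapport.json"):
            self.path = path
            try:
                with open(path) as f:
                    self.state = json.load(f)
            except FileNotFoundError:
                self.state = {}

        def note(self, person, event, delta):
            rec = self.state.setdefault(person, {"rapport": 0.0, "events": []})
            rec["rapport"] += delta
            rec["events"].append(event)
            with open(self.path, "w") as f:
                json.dump(self.state, f)

        def context_for(self, person):
            rec = self.state.get(person, {"rapport": 0.0, "events": []})
            return (f"Rapport with {person}: {rec['rapport']:+.1f}. "
                    f"History: {'; '.join(rec['events'][-3:]) or 'none'}")

    mem = RelationshipMemory()
    mem.note("Bob", "contradicted VP in planning meeting", -0.5)
    print(mem.context_for("Bob"))  # prepend this to the system prompt

Stateless API calls have none of this by default, which is exactly why the 'Backchannel VP' came out so rational.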

rpdillon 3 days ago | parent | prev [-]

I'm not sure you understand the job. Do you have management experience? It's mostly about discussion, agreeing on how to proceed, and building relationships. It's not clear to me at all that people will want to work for AI instead of a real human that cares. I certainly wouldn't.

pingananth 3 days ago | parent [-]

Agreed. People work for people, not APIs. That human connection and the feeling that your manager actually cares (hopefully :D) is the one thing you can't automate away.