ryandrake 2 days ago

My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better. That positive outcome is pre-supposed: there doesn't seem to be any affordance for the case where AI actually makes your work worse or slower. I guess we're supposed to ignore those cases and only mention the times it worked.

It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.

noosphr 2 days ago | parent | next [-]

Just ask an AI to write how it made you more productive in daily work. It's really good at that. You can pad it out to a million words by asking it to expand on each section with subsections.

bdangubic a day ago | parent | next [-]

if one works at a place like ryandrake's, for sure so much this :) also ask it to ultrathink and be super comprehensive, you’ll be promoted in no time

Gud a day ago | parent | prev [-]

Brilliant!

belval 2 days ago | parent | prev | next [-]

I was in a lovely meeting where a senior "leader" was looking at effort estimates and said "Do these factor in AI-tools? Seems like it should be at least 30% lower if it did."

Like I use AI tools, I even like using them, but saying "this tool is so good it will cut our dev time by 30%" should be coming from the developers themselves or their direct manager. Otherwise they are just making figures up and forcing them onto their teams.

scrumper 2 days ago | parent [-]

I was that manager. I dunno about your senior leader, but with me it was coming from a healthy place. After a few months of rah-rah from the C-suite about how we were now an AI-first company (we're a tech consultancy building one-off stuff for customers) and should be using AI in all our customer projects, I asked the question, quite reasonably I thought: "so am I going to offer lower prices to my clients, or am I going to see much higher achieved margins on projects I sell?"

And, crickets. In practice I haven't seen any efficiencies despite my teams using AI in their work. I am not seeing delivery coming in under estimates, work costs what it always cost, we're not doing more stuff or better stuff, and my margins are the same. The only difference I can see is that I've had to negotiate a crapton of contractual amendments to allow my teams to use AI in their work.

I still think it's only good for demos and getting a prototype up and running which is like 5% of any project. Most technical work in enterprise isn't going from zero to something, it's maintaining something, or extending a big, old thing. AI stinks at that (today). You startup people with clean slates may have a different perspective.

utopiah a day ago | parent | next [-]

> I still think it's only good for demos and getting a prototype

Have you actually tried that? Because my bet is that if your "prototype" is anything very VERY traditional, e.g. a CMS, an online shop, or anything that has examples online, then yes, it will be quick; but if it's genuinely new, namely something NOT available out there, maybe because it relies on the latest stack that is not yet well documented, then I bet it will also fail terribly.

Edit: I personally did, namely using LLMs to make XR demos relying on a now relatively popular framework, https://aframe.io, and basically it fails most of the time by proposing "traditional" HTML/CSS, missing entirely that it's 3D. Anyway, long story short, it didn't work for me, so I'm curious to know whether the "getting a prototype" part (a genuine prototype, not a codebase started from scratch, because IMHO that's different) is validated or just an idea.
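
For concreteness, a correct answer needs to produce a scene graph of A-Frame entities, not flat page layout. A minimal sketch (the entity names are A-Frame's standard primitives; the specific attributes are just illustrative, and it assumes aframe.min.js is already loaded on the page):

    // A-Frame scenes are DOM custom elements, so a correct answer has to
    // build <a-scene> entities positioned in 3D space, not HTML/CSS layout.
    const scene = document.createElement("a-scene");
    const box = document.createElement("a-box");
    box.setAttribute("position", "-1 0.5 -3"); // x y z, in metres
    box.setAttribute("color", "#4CC3D2");
    const sky = document.createElement("a-sky");
    sky.setAttribute("color", "#ECECEC");
    scene.appendChild(box);
    scene.appendChild(sky);
    document.body.appendChild(scene);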

scrumper a day ago | parent [-]

I've only seen it do prototypes of CRUD apps, e-commerce storefronts, dashboards - vanilla stuff in other words. Get an end-to-end skeleton up and running fast; it's really good for that.

Never seen it used for anything novel so I can't refute you.

stiglitz 2 days ago | parent | prev [-]

Another possibility: your teams are working less now.

scrumper a day ago | parent | next [-]

Yes, true, that is a possibility, and something that's quite hard to police remotely. It doesn't feel likely: everyone's working about the same, booking the same hours, and overtime is no different. But I can't rule it out.

It'd mean that hundreds of people would all be goofing off silently. I'd expect at least overtime bookings to decrease and they haven't - even with our strong incentives to not book o/t.

a day ago | parent [-]
[deleted]
a day ago | parent | prev [-]
[deleted]
Yoric 2 days ago | parent | prev | next [-]

« AI has made me productive by writing most of the answer to this question. You may ignore everything after this sentence, it is auto-generated purely from the question, without any intersection with reality. »

pjc50 2 days ago | parent | prev | next [-]

It's amazing how US business culture has reinvented Soviet Stakhanovism.

mallowdram a day ago | parent | next [-]

This is absolutely dead-on.

cowpig 2 days ago | parent | prev | next [-]

What do you mean by this? My understanding is that Stakhanovism is kind of the opposite of US work culture in that it lionizes the worker and social contributions

fasbiner 2 days ago | parent | next [-]

Your understanding is somewhat incomplete. There is a strong top-down push to celebrate output maximization without inquiring too closely about if this output is meaningful, valuable, repeatable, or even happened at all.

justonceokay 2 days ago | parent | prev | next [-]

It’s about making the productivity metrics go up, even at the expense of real productivity or common sense. The man it was named after ironically (unironically?) faked his own metrics.

ryandrake 2 days ago | parent [-]

It doesn't seem to even be about making metrics go up. It's about telling a narrative-reinforcing story: That AI is great. It's worth it. Leadership is right to be obsessed with it.

I would have thought that in a fight between "fooling ourselves with a story" and "metrics go up", the metrics would win, but it seems not to be the case.

pjc50 a day ago | parent | next [-]

Oh, if the narrative pressure is strong enough people will just fake the metrics.

The promise of replacing your expensive staff who talk back with cheap malleable AI is just too tempting.

justonceokay 2 days ago | parent | prev | next [-]

What you said is what I was thinking. Thanks for phrasing it so eloquently.

XorNot 2 days ago | parent | prev [-]

Same thing happened to work from home. Meta straight up sabotaged the reason for its own rebrand in service of it.

smcin 18 hours ago | parent [-]

How do you mean?

XorNot 2 hours ago | parent [-]

A return to the office mandate from a company that was not only selling augmented reality but specifically selling remote workplace themed augmented reality applications rather undermines the message.

smcin an hour ago | parent [-]

Not really: Meta's overall brand was not in general about using AR headsets in the workplace, if you factor in IG and WA. Developing AR is not necessarily the same as using it for leisure let alone using it in the workplace, notwithstanding Meta's marketing claiming it could be used for that. (Are there any stats to show that business users in general ever believed Meta's marketing on AR? I'm not aware of any.)

Also, the RTO mandate can serve as a basis/pretext for layoffs and cost consolidation, although CXOs tend not to admit that directly.

pjc50 a day ago | parent | prev [-]

> Stakhanovism is kind of the opposite of US work culture in that it lionizes the worker and social contributions

Lionizes the worker for their output. Just the same as quota or target driven systems. It's the same kind of lionization as "employee of the month" schemes, i.e. a sham to encourage people to work harder.

Mistletoe 2 days ago | parent | prev [-]

The President is a Soviet-planted saboteur, so it's not that surprising; it's coming from the top down. I assume this is the US manufacturing revolution he has in mind.

https://en.wikipedia.org/wiki/Stakhanovite_movement

>In 1988, the Soviet newspaper Komsomolskaya Pravda stated that the widely propagandized personal achievements of Stakhanov actually were puffery. The paper insisted that Stakhanov had used a number of helpers on support work, while the output was tallied for him alone.

rsynnott a day ago | parent | prev | next [-]

> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better.

Bloody hell. That feels like getting into borderline religious territory.

yifanl 2 days ago | parent | prev | next [-]

Ways AI has made me more productive: spellcheck has reduced the number of typos I've made in Slack threads by between 4 and 10%.

mattgreenrocks 2 days ago | parent | prev | next [-]

Fascinating example of corporate double-speak here!

> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too.

Now they're on record as pro-AI while the zeitgeist is all about it, but simultaneously also having plausible deniability if the whole AI thing crumbles to ashes: "we only said to use it if it helped productivity!"

Do you see? They cannot be wrong.

gdulli 2 days ago | parent | prev | next [-]

> but also went the extra step to mandate that it make us more productive, too.

Before you make any decision, ask yourself: "Is this good for the company?"

duxup a day ago | parent | prev | next [-]

I wonder how much this has to do with the LinkedIn world, where everyone is posting "I made us 100% more efficient last week with AI!" type stuff.

I'm not normally on LinkedIn, but I was recently, and the "look at me" spam around AI seems an order of magnitude more absurd than usual.

everdrive 2 days ago | parent | prev | next [-]

This must be how conspiracy theorists feel. How could a whole class of people (the professional managerial class) all decide at once that AI was a wonderful tool we all must adopt now, that it's going to make all of us more productive, and that we're 100% certain about it? It boggles the mind. I'm sure it's just social contagion, hype, and profit motive, but it definitely feels like a conspiracy sometimes.

rsynnott a day ago | parent | next [-]

It's social contagion. "Management", as a class, is actually fairly vulnerable to this; this is only the latest of a long, long line of magical things which will make everything more productive. Remember Six Sigma (as a white-collar cult, rather than as a manufacturing methodology)?

nicbou a day ago | parent | prev | next [-]

Collective hysteria does not need to be planned. Sometimes things just fall into place, just like the conditions for a hurricane.

It seems to me like too many yearly bonuses are tied to AI implementation, due to FOMO amongst C-levels. The hype trickles down to developers afraid that they won't get hired in the new AI economy.

I don't think there's a conspiracy, just a storm front of perverse incentives.

pjc50 a day ago | parent | prev | next [-]

Spending billions of dollars on marketing works.

moomoo11 2 days ago | parent | prev [-]

There’s no conspiracy.

The people making the decisions are the top 5%; they delegate to managers, who delegate to their teams, and so on all the way down.

Decision makers (not the guy who thinks corner radius should be 12 instead of 16, obviously) want higher ROI, and they see AI working for them for high-level stuff.

At the low level, things are never sane.

Before AI it was offshore. Now it’s offshore with AI.

Prepare for chaos, the machine priests have thrown open the warp gate. May the Emperor have mercy on us.

2 days ago | parent | prev | next [-]
[deleted]
thatfrenchguy 2 days ago | parent | prev | next [-]

It’s kind of a good way to make your business collapse, though, because figuring out which kinds of problems LLMs are useful for and where they’ll destroy your productivity is extremely important.

didibus 2 days ago | parent | prev | next [-]

Does your company have a stake in AI?

I suspect a lot of companies that go that route are pushing a marketing effort since they themselves have a stake in AI.

But I'd love to hear from truly customer-only businesses, where AI is pure cost with no upside unless it truly pays for itself in business impact. Are they too stuck in a loop of justifying their added cost to make their decision seem like a good one no matter what, or are they being more careful?

pkaye 2 days ago | parent | prev | next [-]

> Come annual review time, we need to write down all the ways AI made our work better.

That is where the AI comes into full use.

vrighter a day ago | parent | prev | next [-]

Yep, they have to justify the spend. Where I work, they've literally disabled our ability to disable it via group policy. Statistical manipulation.

cjbgkagh 2 days ago | parent | prev | next [-]

Just make shit up, or even better have the AI make shit up for you

Macha 2 days ago | parent [-]

The problem is, the shit that's made up will be used to justify the decision as a success and ensure the methodology continues.

cjbgkagh 2 days ago | parent | next [-]

If they’re mandating use like this I doubt it’s their only dysfunction. At least this one has a built in scapegoat.

2 days ago | parent | prev | next [-]
[deleted]
sodapopcan 2 days ago | parent | prev | next [-]

Sounds like they are going to consider it a success no matter what.

kingstnap 2 days ago | parent | prev | next [-]

Make it long enough that it's not worth any human's time reading it. Like full-on balls to the walls slop. Pass it between different LLMs to add additional conclusions and subheadings.

In fact, do it in parallel where one chatbot is adding another few pages here, and simultaneously and independently, another is adding different pages somewhere else and concatenate the results together.

Once you get about 25 pages of dense slop, just conclude that AI made writing this report 1000x more efficient.
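
If you want to automate the pipeline itself, a tongue-in-cheek sketch (callModel is a hypothetical stand-in for whatever chat API you have, not a real library function):

    // Hypothetical helper wrapping your chat API of choice.
    declare function callModel(prompt: string): Promise<string>;

    // Fan out: each chatbot independently pads a different section,
    // then the results are concatenated into one dense report.
    async function generateSlop(outline: string[]): Promise<string> {
      const pages = await Promise.all(
        outline.map((section) =>
          callModel(`Expand "${section}" into several pages with subheadings.`)
        )
      );
      return pages.join("\n\n");
    }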

shalmanese a day ago | parent [-]

Hah, that's not going to help, they're not going to read any individual response. They're going to feed the entire thing into the AI slop machine and ask it to generate them a summary. It's a slop human centipede.

jjk166 2 days ago | parent | prev [-]

Their ability to convince themselves they are geniuses far exceeds your ability to convince them they're not. They've already decided that the decision was a success. The only question is: are you still going to be around to suffer the consequences when their delusions collide with objective reality?

obezyian 2 days ago | parent | prev | next [-]

I went through this shit a year ago. The reports had to be weekly, though.

Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.

meindnoch 2 days ago | parent | prev | next [-]

Just give them an AI generated response.

cyanydeez a day ago | parent | prev | next [-]

Sounds like a case of Republicanism.

contingencies 2 days ago | parent | prev | next [-]

News just in: Nvidia dumped $100B into OpenAI to pump the failing bubble.

2 days ago | parent | prev | next [-]
[deleted]
nilkn 2 days ago | parent | prev [-]

Look, for most corporate jobs there's honestly no way you truly cannot find some use of AI tools that makes you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple lines of code here and there, writing a SQL query faster because you're rusty with it, learning a new framework or library faster than you would have otherwise, or learning a new concept to work with a cross-functional peer. It does not pass the smell test that you could find absolutely nothing in most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in good faith, is what they're trying to combat or make unacceptable.

Zagreus2142 2 days ago | parent | next [-]

If the corporate directive was to share "if AI has helped and how," I would agree. But my company started that way, and I tested the new SQL query analysis tool and reported (nicely and politely, with positive feedback too) that it was making up whole tables to join against: it assumed we had a simple "users" table with email/id columns, which we did not have, being a large company with purposefully segmented databases. The users data was only ever exposed via API calls, never direct DB access.

My report was entirely unacknowledged along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.

About a year later, upper management changed to this style of hard-requiring LLM usage, to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis, otherwise you would be PIP'd.

This is abusive behavior aimed at generating a positive response the c suite can give to the board.

nilkn a day ago | parent [-]

I know you don't want to hear this, but I also know you know it's true: you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here. Your single example means pretty much nothing in terms of whether the tool makes sense at large scale. No tool or technology in this entire field never fails or never has issues. You could just as well argue that because you once read something wrong on Google or Stack Overflow, those tools should be banned or discouraged, yet that is clearly false.

That said, I don't agree with or advocate the specific rollout methodology your company is using and agree that it feels more abusive and adversarial than helpful. That approach will certainly risk backfiring, even if they aren't wrong about the large-scale usefulness of the tools.

What you're experiencing is perhaps more poor change management than it is a fundamentally bad call about a toolset or technology. They are almost certainly right at scale more than they are wrong; what they're struggling with is how to rapidly re-skill their employee population when it contains many people resistant to change at this scale and pace.

Zagreus2142 a day ago | parent [-]

> I know you don't want to hear this, but I also know you know this is true

I wasn't sanctimonious to you; please don't be so to me.

> you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here

I compared notes with a couple friends on other teams and it was the same for each one. Yes, it's anecdotes, but when the exact same people producing/integrating the service are also grading its success AND making this very argument while hiding any data that could be used against them, I know I'm dealing with people who will not tell the truth about what the data actually says.

nilkn a day ago | parent [-]

If you truly think the team responsible for this made a bad call, you need to go look at all the data they collected. Otherwise, yes, you're just sharing a couple anecdotes, and that is problematic and can't be brushed off or ignored. While it's possible that the people integrating the service just ignored negative feedback and are apparently pathological liars (as you accuse them of being), it's also possible that it's actually you who is ignoring most of the data and being disingenuous or manipulative about it. You are demonstrating a lot of paranoid, antagonistic thinking about a team that might just have a broader good-faith perspective than you do.

dukeyukey 2 days ago | parent | prev [-]

It's not a good-faith question to say "here's a new technology, write about how it made you more productive" and expect the answer to have a relationship with the truth. You're pre-ordaining the answer!

manquer a day ago | parent | next [-]

Let's imagine it is 1990 and the tool is e-mail over snail mail. Would you want a company's leadership to let every employee find out on their own whether email is a better way to communicate, despite the spam, impersonal nature, security problems, and myriad other issues that patently exist to this day? Or to allow exceptions whenever an employee insists (or even shows) that snail mail is better for them?

It is hardly feasible for an organization to budget time for every employee who wishes to question the effectiveness of the tool, or the manner of its deployment, to replicate and validate the results and form their own conclusions.

Presumably the organization has done that validation with a reasonably sized sample of similar roles over a significant period of time. It doesn't matter though; it would also be sound reasoning for leadership to make a strategic call even when such tests are not conducted or not applicable.

There are costs and time associated with accurate validation that they are unable or unwilling to pay, even if they wish they could. The competition is moving faster and not waiting, so deploying now rather than waiting to validate is not necessarily even a poor decision.

---

Having said that, they could articulate their intent better than "write about how it made you more productive" by adding a description along the lines of "if it did not, explain everything you tried in order to adopt the tool and what did not go well for you / your role".

Typically, well-structured organizations with in-house I/O psychologists would add this kind of additional language in the feedback tooling; line managers may not be as well trained to articulate it in informal conversations, which is a whole different kind of problem.

nilkn 2 days ago | parent | prev [-]

The answer isn't pre-ordained -- it's simply already known from experience, at least to a sufficient degree to not trust someone claiming it should be totally avoided. Like I said, there are not many corporate roles where it's legitimately impossible to find any kind of gain, even a small or modest one, anywhere at all.