nilkn 2 days ago

Look, for most corporate jobs, there's honestly no way you truly cannot find some use of AI tools that makes you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple of lines of code here and there, writing a SQL query faster because you're rusty, learning a new framework or library faster than you otherwise would, learning a new concept so you can work with a cross-functional peer, etc. It does not pass the smell test that you could find absolutely nothing in most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in good faith, is what they're trying to combat or make unacceptable.

Zagreus2142 2 days ago | parent | next [-]

If the corporate directive was to share "if AI has helped and how," I would agree. But my company started that way. When I tested the new SQL query analysis tool, I reported (nicely and politely, with positive feedback too) that it was making up whole tables to join against -- it assumed we had a simple "users" table with email/id columns, which we did not have, because we're a large company with purposefully segmented databases. The user data was only ever exposed via API calls, never direct DB access.
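To give a sense of the failure mode, here's a hypothetical reconstruction of the kind of query it would suggest -- not the tool's literal output; the table and column names are made up for illustration:

    -- Suggested query: assumes a monolithic "users" table with id/email columns
    SELECT o.order_id, u.email
    FROM orders o
    JOIN users u ON u.id = o.user_id;  -- no such "users" table existed in our schema

    -- In reality, user data was only reachable through an internal API,
    -- never via a direct DB join.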

My report was entirely unacknowledged along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.

About a year later, upper management switched to this style of hard-requiring LLM usage -- to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis, otherwise you would be PIP'd.

This is abusive behavior aimed at generating a positive response the C-suite can give to the board.

nilkn a day ago | parent [-]

I know you don't want to hear this, but I also know you know this is true: you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here. Your single example means pretty much nothing in terms of whether the tool makes sense at large scale. Not a single tool or technology in this entire field never fails or has issues. You could just as well argue that because you once read something wrong on Google or Stack Overflow, those tools should be banned or discouraged, which is clearly false.

That said, I don't agree with or advocate the specific rollout methodology your company is using and agree that it feels more abusive and adversarial than helpful. That approach will certainly risk backfiring, even if they aren't wrong about the large-scale usefulness of the tools.

What you're experiencing is perhaps poor change management more than a fundamentally bad call about a toolset or technology. They are almost certainly right at scale more often than they are wrong; what they're struggling with is how to rapidly re-skill an employee population that contains many people resistant to change at this scale and pace.

Zagreus2142 a day ago | parent [-]

> I know you don't want to hear this, but I also know you know this is true

I wasn't sanctimonious to you, don't be so to me please.

> you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here

I compared notes with a couple of friends on other teams and it was the same for each one. Yes, these are anecdotes, but when the exact same people producing/integrating the service are also grading its success, AND they make this very argument while hiding any data that could be used against them, I know I am dealing with people who will not tell the truth about what the data actually says.

nilkn a day ago | parent [-]

If you truly think the team responsible for this made a bad call, you need to go look at all the data they collected. Otherwise, yes, you're just sharing a couple of anecdotes, and that's a real problem that can't be brushed off or ignored. While it's possible that the people integrating the service simply ignored negative feedback and are, as you accuse them of being, pathological liars, it's also possible that it's actually you who is ignoring most of the data and being disingenuous or manipulative about it. You are demonstrating a lot of paranoid, antagonistic thinking about a team that might just have a broader good-faith perspective than you do.

dukeyukey 2 days ago | parent | prev [-]

It's not a good-faith question to say "here's a new technology, write about how it made you more productive" and expect the answer to bear any relationship to the truth. You're pre-ordaining the answer!

manquer a day ago | parent | next [-]

Let's imagine it is 1990 and the tool is e-mail over snail mail. Would you want the leadership of a company to let every employee find out on their own whether email is a better way to communicate, despite the spam, impersonal nature, security problems, and myriad other issues that patently exist to this day? Or to allow exceptions if an employee insists (or even shows) that snail mail is better for them?

It is hardly feasible for an organization to budget time for every employee who wishes to question the effectiveness of the tool, or the manner of its deployment, to replicate and validate results and form their own conclusions.

Presumably the organization has done that validation with a reasonably sized sample of similar roles over a significant period of time. It doesn't matter much either way, though; it would also be sound reasoning for leadership to make a strategic call even when such tests are not conducted or not applicable.

There are costs and delays associated with accurate validation that they may be unable or unwilling to bear, even if they wish to. The competition is moving fast and not waiting, so deploying now rather than waiting to validate is not necessarily even a poor decision.

---

Having said that, they could articulate their intent better than "write about how it made you more productive" by adding something along the lines of "if it didn't, explain what you tried in order to adopt the tool and what did not go well for you or your role."

Typically, well-structured organizations with in-house I/O psychologists would add this kind of additional language to the feedback tooling; line managers may not be as well trained to articulate it in informal conversations, which is a whole different kind of problem.

nilkn 2 days ago | parent | prev [-]

The answer isn't pre-ordained -- it's simply already known from experience, at least to a sufficient degree not to trust someone claiming it should be avoided entirely. Like I said, there are not many corporate roles where it's legitimately impossible to find any kind of gain, even a small or modest one, anywhere at all.