Zagreus2142 2 days ago

If the corporate directive were just to share "if AI has helped and how," I would agree. But my company started that way. When I tested the new SQL query analysis tool, I reported (nicely and politely, with positive feedback too) that it was making up whole tables to join against: it assumed we had a simple "users" table with email/id columns, which we did not have, since we're a large company with purposefully segmented databases. User data was only ever exposed via API calls, never direct DB access.
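
To make that concrete, the generated queries looked roughly like this (a hypothetical sketch, not our actual schema; the "orders" table and column names are just illustrative):

    -- The tool invented a local "users" table with id/email columns:
    SELECT o.order_id, u.email
    FROM orders o
    JOIN users u ON u.id = o.user_id;  -- no such "users" table exists in this database

    -- In reality, orders only carry an opaque user id, and email/profile data
    -- has to be fetched through the user service's API, not joined in SQL.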

My report went entirely unacknowledged, along with the other reports that had negative findings. The team in charge then published a self-report on the success rate and claimed over 90% perfect results.

About a year later, upper management switched to this style of hard-requiring LLM usage, to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis; otherwise you would be PIP'd.

This is abusive behavior aimed at generating a positive response the C-suite can give to the board.

nilkn a day ago | parent [-]

I know you don't want to hear this, but I also know you know this is true: you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here. Your single example means pretty much nothing in terms of whether the tool makes sense at large scale. Not a single tool or technology exists in this entire field that never fails or has issues. You could just as well argue that because you read something wrong on Google or Stack Overflow, those tools should be banned or discouraged, yet that is clearly false.

That said, I don't agree with or advocate the specific rollout methodology your company is using, and I agree that it feels more abusive and adversarial than helpful. That approach certainly risks backfiring, even if they aren't wrong about the large-scale usefulness of the tools.

What you're experiencing is perhaps poor change management more than a fundamentally bad call about a toolset or technology. They are almost certainly more right than wrong at scale; what they're struggling with is how to rapidly re-skill an employee population that contains many people resistant to change at this scale and pace.

Zagreus2142 a day ago | parent [-]

> I know you don't want to hear this, but I also know you know this is true

I wasn't sanctimonious to you; please don't be so to me.

> you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here

I compared notes with a couple of friends on other teams and it was the same for each of them. Yes, it's anecdotes, but when the exact same people producing and integrating the service are also grading its success, AND they make this very argument while hiding any data that could be used against them, I know I am dealing with people who will not tell the truth about what the data actually says.

nilkn a day ago | parent [-]

If you truly think the team responsible for this made a bad call, you need to go look at all the data they collected. Otherwise, yes, you're just sharing a couple of anecdotes, and that is a problem that can't be brushed off or ignored. While it's possible that the people integrating the service simply ignored negative feedback and are pathological liars (as you accuse them of being), it's also possible that it's actually you who is ignoring most of the data and being disingenuous or manipulative about it. You are demonstrating a lot of paranoid, antagonistic thinking about a team that might just have a broader, good-faith perspective than you do.