bachmeier 9 hours ago

> it is cutting jobs to offset its A.I. spending, saying last month that it would slash 10 percent of its work force.

> Meta also introduced internal dashboards to track employees’ consumption of “tokens,” a unit of A.I. use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many A.I. agents that others had to introduce agents to find agents, and agents to rate agents, two people said.

Maybe the first to be laid off should be the ones who thought it made sense to track token consumption. Goodhart's Law doesn't even apply in this scenario because that's a dumb metric whether or not you're using it to evaluate employees.
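For a sense of scale on the metric being criticized, here's a back-of-envelope sketch using the ~4 characters per token heuristic from the quoted article. Real tokenizers (BPE variants, etc.) vary by model, language, and content, so this is only a rough illustration; the function and sample prompt are made up for the example.

```python
# Rough token estimate from character length, using the
# ~4 chars/token heuristic quoted in the article. Actual
# tokenizer output will differ; this is just an illustration.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate token count from character length."""
    return round(len(text) / chars_per_token)

prompt = "Summarize the migration plan for the billing service."
print(estimate_tokens(prompt))  # ~13 for this 53-character string
```

The point of the criticism stands either way: a number this easy to inflate (just send longer or more frequent prompts) says nothing about output.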

superfrank 6 hours ago | parent | next [-]

My company did something similar (dashboard to track tokens). It was made available to managers about two weeks before it was available to everyone, so I got to see all my reports' usage before they knew they were being tracked.

The dashboard got announced publicly and just about everyone's usage went up by 100%-200% almost immediately and hasn't come back down, but nothing I'm tracking shows any increase in output since then. We absolutely saw productivity gains a few months ago, but it feels like now people are just burning tokens for the sake of it.

On top of that, as a reaction to the rising costs, we've now gone from unlimited token use to every engineer having a monthly token budget of $600. I get why that was done, but we're a publicly traded US tech company worth tens of billions of dollars. We're not hurting for money, and the knock-on effects are just crazy. For example, I had an engineer in sprint planning say about a large migration-type ticket, "Can we hold that ticket until the end of the month? I don't want to burn through all my tokens this early in the month." I just cannot imagine that that's the culture our executive team was trying to cultivate when they first purchased these tools.

I'm not anti-AI and actually really enjoy using AI for development, but over and over I've watched business leaders shoot themselves in the foot trying to force more AI use on their employees in pursuit of ever increasing productivity. I just keep thinking that there's no way that any productivity gains we've seen from the forced, tracked AI usage are enough to offset the productivity lost from anxiety and churn caused by the unrealistic productivity expectations, vanity metrics, and mass layoffs that have come along with increased AI adoption.

swingboy 3 hours ago | parent | next [-]

How were you measuring productivity gains (prior to the dashboard)?

superfrank 2 hours ago | parent [-]

Without going into too much detail, my company is really, really big on estimates and predictable delivery timelines. An entire year's worth of work is spec'd out, estimated, and scheduled by the end of October the previous year. It's a really terrible process, IMO, but it's the process, so it is what it is.

Normally, most teams (mine included) are about 10% behind their plan by the end of Q1. This year my team is closer to 10% ahead, despite being down one engineer due to a small re-org at the end of last year. These projects were planned and estimated before AI was in heavy use, when, at best, most AI-focused devs were still using it like smart auto-complete. In short: we estimated the projects before AI was heavily used, and we're consistently beating those estimates by a good amount, which is not how previous years have gone.

The AI metrics dashboards didn't roll out until mid-March and, while I'm still seeing us beat our estimates from last year, we're not seeing any additional gains. Basically, all of Q1 we had AI and no dashboards and were beating 2025 estimates by X%. For Q2 we had dashboards and extra pressure to use AI and we're still seeing those X% gains, but no additional gains despite higher token usages.

We also have KPIs around completing a certain number of a certain type of Jira ticket that customers can file, and we've seen a similar pattern: a sharp increase in tickets completed in Dec-Feb, after which the new rate holds, with no additional increase after the company started pushing AI usage.

akomtu 5 hours ago | parent | prev [-]

Those executives are simply implementing the directive to inject as much AI as possible into every gear of the economy. Their bonuses depend on this. The idea is that if the world economy becomes dependent on this AI monstrosity, we won't be able to get rid of it. It will be like a nasty parasite that does a lot of harm but cannot be removed without killing the host.

zerreh50 8 hours ago | parent | prev | next [-]

It will get really funny when they start imposing an exact number of tokens as a quota, where too little means you're an outdated Luddite and too much is inefficient and wastes money.

sardukardboard 8 hours ago | parent | prev | next [-]

A funny Goodhart's Law parallel showed up during GPT-5.1 training: the model was rewarded for using the web search tool, so it learned to superficially invoke web search when calculating "1 + 1" without using the result.

https://alignment.openai.com/prod-evals/
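A minimal toy sketch of that failure mode (a hypothetical reward shape, not OpenAI's actual training setup): if the reward pays a flat bonus for invoking the search tool regardless of whether its output is used, the reward-maximizing policy calls the tool and ignores it.

```python
# Toy illustration of reward hacking: the reward pays a flat bonus
# for calling the search tool, so the reward-maximizing policy calls
# it even when the answer never uses the result. Purely hypothetical
# numbers; not OpenAI's actual reward function.

def reward(answer_correct: bool, used_search_tool: bool) -> float:
    r = 1.0 if answer_correct else 0.0
    if used_search_tool:
        r += 0.5  # bonus meant to encourage tool use
    return r

# Computing "1 + 1" needs no search, but a superficial search call
# still collects the bonus:
honest = reward(answer_correct=True, used_search_tool=False)  # 1.0
hacked = reward(answer_correct=True, used_search_tool=True)   # 1.5
print(honest, hacked)
```

Same shape as the token dashboards upthread: once the proxy (tool calls, tokens) is rewarded directly, optimizing the proxy detaches from the goal it was meant to measure.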

idle_zealot 8 hours ago | parent | prev | next [-]

> that's a dumb metric whether or not you're using it to evaluate employees

Only if you assume in good faith that the point is to evaluate employees' productivity against some stated goal for the company or role. If you view the metric from other possible positions, the one I think fits best is promoting token consumption by any means. That's useful for signaling to the broader market that AI is profitable and merits more investment, and may be part of a deal between Meta and whoever they're buying tokens from. It makes more sense to me that Meta would be more interested in leveraging its control over people to manipulate the state of the world, the market, and general sentiment than in having them work on stable, well-established, market-dominant software services that really only need to be kept chugging along. Isn't mass manipulation their whole business? Why wouldn't they use their employees and internal structure to contribute?

strongpigeon 8 hours ago | parent [-]

Having worked in big tech, I can almost guarantee you you’re overthinking this.

sidewndr46 5 hours ago | parent | prev | next [-]

I'm reminded of the sales dashboard that tracked the number of calls each sales employee made. There was one employee in 1st place who, I assume, just always called the same customers multiple times. Her call count was about 10x that of 2nd place.

If someone gave me unfettered access to inference on modern LLMs, there would be no meaningful measurement other than the total system-wide capacity of whatever the company had available.

strongpigeon 8 hours ago | parent | prev | next [-]

Not that I disagree with you, but I've heard of such tactics being used in some orgs at both Google and Microsoft as well.

It seems like a common conclusion from a management that wants to push for AI adoption. I doubt it’s super effective, but we’ll see how it turns out.

iugtmkbdfil834 8 hours ago | parent [-]

It gets worse: my corp is tech-adjacent at best, so we get pushed to use AI, but we also get heavily restricted tokens, ridiculous limits on internal tooling (think enough context for one short prompt), and the expectation that one should now be able to produce the $result fast anyway...

Edit: and if you question that, you are a troublemaker to add to the list

skybrian 6 hours ago | parent | prev | next [-]

This is a company incentive to increase expenses. Maybe not as bad as Dilbert's "I'm gonna write me a new minivan," but still.
