| ▲ | upupupandaway 4 hours ago |
| I work for a large tech company, and our CTO has just released a memo with a new rubric for SDEs that includes "AI Fluency". We also have a dashboard with AI Adoption per developer, which is being used to surveil the teams lagging on the topic. All very depressing. A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9,700 created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause, all of which will have to be fixed in the future. edit: typo |
|
| ▲ | steveBK123 4 hours ago | parent | next [-] |
| This is absolutely the norm across corporate America right now.
Chief AI Czars enforcing AI usage metrics, with mandatory AI training for anyone who isn't complying. People in roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc. It's deeply fascinating psychologically and I'm not sure where this ends. I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings. |
| |
| ▲ | asa400 3 hours ago | parent | next [-] | | > I've never seen any tech theme pushed top down so hard in 20+ years working. > The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings. I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not the rank-and-file. I've never seen anything like it. Other sticky "productivity movements" - or, if like me you're less generous, fads - that operated at the level of the individual and the team, such as agile development methodologies, object-oriented programming, or test-driven development, were generally invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face. Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not the ones at the leading edge mandating company-wide adoption. What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people has such an enormously outsized role in both allocating capital and running our lives. | | |
| ▲ | agentultra an hour ago | parent [-] | | And telling us how to do our jobs. As if they've ever compared the optimized output of clang and gcc on an example program to track down a performance regression at 2AM. |
| |
| ▲ | ryandrake 4 hours ago | parent | prev | next [-] | | I don't understand how all these companies issue these sorts of policies in lock-step with each other. The same happened with "Return To Office". All of a sudden every company decided to kill work from home within the same week or so. Is there some secret CEO cabal that meets on a remote island somewhere to coordinate what they're going to all make workers do next? | | |
| ▲ | ambicapter 3 hours ago | parent | next [-] | | CEOs are ladder climbers. The main skill in ladder climbing is being in tune with what the people around them are thinking, and doing what pleases/maximizes others' approval of the job they are doing. | |
| ▲ | asa400 2 hours ago | parent | prev | next [-] | | It's extremely human behavior. We all do it to some degree or another. The incentives work like this: - If all your peers are doing it and you do it and it doesn't work, it's not your fault, because all your peers were doing it too. "Who could have known? Everyone was doing it."
- If all your peers _aren't_ doing it and you do it and it doesn't work, it's your fault alone, and your board and shareholders crucify you. "You idiot! What were you thinking? You should have just played it safe with our existing revenue streams."
And the one for what's happening with RTO, AI, etc.: - If all your peers are doing it and you _don't do it_ and it _works_, your board crucifies you for missing a plainly obvious sea change to the upside. "You idiot! How did you miss this? Everyone else was doing it!"
Non-founder/mercenary C-suites are incentivized by shareholders and boards to be fundamentally conservative. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold, resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines. Imagine a CEO going to their board today and saying, "we're going to sit out potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this, you're fired". | |
| ▲ | chung8123 4 hours ago | parent | prev | next [-] | | It is investor sentiment and FOMO. If your investors feel like AI is the answer, you will need to start using AI. I am not as negative on AI as the rest of the group here, though. I think AI-first companies will outpace companies that never start to learn the AI muscle. From my perspective, these memos mostly seem reasonable. | | |
| ▲ | teeklp 2 hours ago | parent | next [-] | | I agree that a lot of the current push is driven by investor sentiment and a degree of FOMO. If capital markets start to believe AI is table stakes, companies don’t really have the option to ignore it anymore.
That said, I’m not bearish on AI either. I think there’s a meaningful difference between chasing AI for signaling purposes and deliberately building an “AI muscle” inside the organization. Companies that start learning how to use, govern, and integrate AI thoughtfully are likely to outpace those that never engage at all.
From that perspective, most of these memos feel fairly reasonable to me. They’re less about declaring AI as a silver bullet and more about acknowledging that standing still carries its own risk. | |
| ▲ | whiplash451 3 hours ago | parent | prev | next [-] | | You might be misreading negative sentiment towards poor leadership as negative sentiment towards AI. | |
| ▲ | goatlover 3 hours ago | parent | prev [-] | | If AI is the answer, then there's no reason for a top-down mandate like this. People will just start using it as they see fit, because it helps them do their jobs better. Having to force it on them doesn't make AI sound much like the answer investors thought it was. | | |
| ▲ | basket_horse an hour ago | parent [-] | | No, because as discussed, AI also changes the nature of your job in a way that might be negative for a worker, even if it's more productive. I.e., it may be more fun to ride a horse to your friend's house, but it's not faster than a car. Or, as in the previous example, it may be more enjoyable to make a shoe by hand, but it's less productive than using an assembly line. |
|
| |
| ▲ | artnoir 4 hours ago | parent | prev | next [-] | | I have wondered the exact same thing. It's uncanny how in-sync they all are. I can only suppose that the trend trickles down from the same few influential sources. | |
| ▲ | steveBK123 4 hours ago | parent | prev | next [-] | | > Is there some secret CEO cabal that meets on a remote island somewhere I mean.. recent FBI files of certain emails would imply.. probably, yes. | | | |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | collingreen 4 hours ago | parent | prev | next [-] | | > FOMOing at the mouth This is a great line - evocative, funny, and a bit of wordplay. I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss, but if it is, they can't afford not to be the first one to fire it. | | |
| ▲ | steveBK123 4 hours ago | parent [-] | | One thing I noticed this week, in terms of the "eye of the beholder" view on AI, was the Goldman press release. Apparently Anthropic has been in there for 6 months helping them with some back office streamlining, and the outcome of that so far has been.. a press release announcing that they are working on it! A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate. I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take. It may turn into another cottage industry like big data / cloud / whatever adoption, where "forward deployed / customer success engineers" are co-located by the thousands for years at a time in order to move the needle. |
| |
| ▲ | hnthrow0287345 4 hours ago | parent | prev [-] | | At least they are consistently applying this to all roles instead of only making tech roles suffer through it like they do with interview processes |
|
|
| ▲ | coldpie 4 hours ago | parent | prev | next [-] |
| I'm so glad I'm nearer the end of my career than the beginning. Can't wait to leave this industry. I've got a stock cliff coming up late this summer, probably a good time to get out and find something better to do with my life. |
| |
| ▲ | actionfromafar 4 hours ago | parent [-] | | Then, you might even tinker with some AI stuff on your own terms, you never know. :) Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-) | | |
| ▲ | coldpie 4 hours ago | parent [-] | | > Then, you might even tinker with some AI stuff on your own terms, you never know Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is. | | |
| ▲ | blindriver 4 hours ago | parent [-] | | If you're finding they're a bad tool for most jobs you're using them for, you're probably being closed-minded and using them wrong. The trick with AI these days is to ask it to do something that you think is impossible; it will usually do a pretty decent job at it, or at least get close enough for you to pick up or to guide it further. I was a huge AI skeptic, but since Jan 2025 I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years, so I've seen both sides, and there is nothing that is going to stop it. | |
| ▲ | coldpie 4 hours ago | parent | next [-] | | I try them a few times a month, always to underwhelming results. They're always wrong. Maybe I'll find an interesting thing to do with them some day, I dunno. It's just not a fun or interesting tool for me to learn to use so I'm not motivated. I like deterministic & understandable systems that always function correctly; "smart" has always been a negative term in marketing to me. I'm more motivated to learn to drive a city bus or walk a postal route or something, so that's the direction I'm headed in. | |
| ▲ | toraway 4 hours ago | parent | prev | next [-] | | Okay, I use OpenCode/Codex/Gemini daily (recently cancelled my personal CC plan given GPT 5.2/3 High/XHigh being a better value, but still have access to Opus 4.5/6 at work) and have found it can provide value in certain parts of my job and personal projects. But the evangelist insistence that it literally cannot be a net negative in any context or workflow is just exhausting to read and is a massive turn-off. So is the refusal to accept that others, with different work styles, may simply not benefit the same way. Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole, I wouldn't be that surprised. There are times where, after knocking request after request out of the park, I spend hours wrangling some dumb failures, or run into spaghetti code from the last "successful" session that massively slows down new development or requires painful refactoring, and I start to question whether this is a sustainable, true net multiplier in the long term. Plus there's the constant time investment of learning and maintaining new tools/rules/hooks/etc that should be counted too. But I enjoy the work style personally, so I stick with it. I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices. | |
| ▲ | blibble 2 hours ago | parent | prev | next [-] | | > you're probably being closed minded and using it wrong > I was a huge AI skeptic but since Jan 2025, > I'm in my 50s and have been programming for 30 years > there is nothing that is going to stop it. I need to turn this into one of those checklists like the anti-spam one and just paste it every time we get the same 5 or 6 clichés | |
| ▲ | n4r9 4 hours ago | parent | prev [-] | | Maybe not everyone finds them as useful for their everyday tasks as you do? Software development is quite a broad term. |
|
|
|
|
|
| ▲ | ej88 4 hours ago | parent | prev | next [-] |
| 1. execs likely have spend commits and pressure from the board about their 'ai strategy', and what better way to show we're making progress than stamping some kpis on it, like # of agents created? 2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot)) 3. there is some subset of ai detractors that refuse to use the tools for whatever reason. the metrics pushed by 1) rarely account for 2) and don't really serve 3). i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly, which is how it should be imo |
| |
| ▲ | Octoth0rpe 4 hours ago | parent | next [-] | | > (cc / codex / cursor / copilot (jk, nobody should be using copilot) I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot? | | |
| ▲ | ej88 4 hours ago | parent [-] | | copilot is a much worse harness, although recently improvements in base model intelligence have helped it a bit if you can, use cc or codex through your ide instead, oai and anthropic train on their own harnesses, you get better performance | | |
| ▲ | Octoth0rpe 4 hours ago | parent [-] | | I'm currently using opus in Zed via copilot (I think that's what you're recommending?) and tbh couldn't be happier. It's hard to imagine what better would look like. | | |
| ▲ | ej88 4 hours ago | parent [-] | | oh, i meant copilot as in microsoft copilot in vscode. i havent used zed so can't speak to it but if it works for you it works! |
|
|
| |
| ▲ | apercu 4 hours ago | parent | prev [-] | | The KPI problem is systemic and bigger than just Gen-AI; it's in everything these days. Actual governance starts with being explicit about business value. If you can't state what a thing is supposed to deliver (and how it will be measured), you don't have a strategy, only a bunch of activity. For some reason, over the last decade or so we have confused activity with productivity. (And words/claims with company value, but that's another topic.) |
|
|
| ▲ | SkyPuncher an hour ago | parent | prev | next [-] |
| I'm so happy I work at a sane company. We're pushing the limits of AI and everyone sees the value, but we also see the danger/risks. I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does. |
|
| ▲ | gtowey 4 hours ago | parent | prev | next [-] |
| Leadership loves AI more than anything they have ever loved before. It's because, for them, the fawning, sycophantic, ego-stroking agent that cheerfully champions every dumb idea they have and helps them realize it with spectacular averageness is EXACTLY what they've always expected to receive from their employees. |
| |
|
| ▲ | SketchySeaBeast 4 hours ago | parent | prev | next [-] |
| This feels like a construction company demanding that everyone, from drywaller to admin assistant, go out and buy a drill. |
| |
| ▲ | munk-a 4 hours ago | parent | next [-] | | Can I modify your example to: Demanding everyone, from drywaller to admin assistant go out and buy a purple colored drill, never use any other colored drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge). | | |
| ▲ | SketchySeaBeast 4 hours ago | parent [-] | | Better, yeah. | | |
| ▲ | munk-a 4 hours ago | parent [-] | | Awesome, with that new policy we'll be sure to justify my purple drill evangelist role by showing that our average employee is dependent on purple drills for at least 1/8th of their workload. Who knew that our employees would so quickly embrace the new technology. Now the board can't cut me! |
|
| |
| ▲ | steveBK123 3 hours ago | parent | prev | next [-] | | It's really cascaded down too. Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom. Throughout the year, they must coordinate the drill training mandated by the Head of Drilling and enforce attendance for the people in their department. And then they must comply with and meet drilling utilization metrics in order to meet their annual goals. Drilling cannot fail, it can only be failed. | |
| ▲ | steveBK123 4 hours ago | parent | prev | next [-] | | This is literally happening in non-tech finance firms where people in non-tech roles are being judged on their AI adoption. | |
| ▲ | MattGaiser 4 hours ago | parent | prev [-] | | Some companies swear by this. CP Rail is notorious for training everyone to drive a train. | | |
| ▲ | SketchySeaBeast 4 hours ago | parent [-] | | That kind of makes sense philosophically if your business is trains, but I don't think that their business was AI agents. Although given they have a VP of AI, I have no idea. What a crazy title. |
|
|
|
| ▲ | palmotea 4 hours ago | parent | prev | next [-] |
| > We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing. Enforced use means one of two things: 1. The tool sucks, so few will use it unless forced. 2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are). |
| |
| ▲ | SketchySeaBeast 4 hours ago | parent | next [-] | | 3. They discovered it's something they can measure so they made a metric about it. | | |
| ▲ | steveBK123 4 hours ago | parent [-] | | 4. They heard from their golf buddy who heard from his racquetball buddy that this other CTO at this other shop is saving lots of money with AI | | |
| ▲ | upupupandaway 4 hours ago | parent [-] | | I know you're speaking half in jest but the C-suite of my area actually used a tweet by an OpenAI executive as the agenda for an AI brainstorm meeting. | | |
| ▲ | munk-a 4 hours ago | parent | next [-] | | Well that's inspiring. If you're going to follow anyone right now be sure to follow someone from the company that has committed to spending a trillion dollars without ever having a profitable product. Those are the folks who know what good business is! | |
| ▲ | SketchySeaBeast 4 hours ago | parent | prev | next [-] | | It'll never cease to amaze me how many powerful people can't tell advice from advertising. | |
| ▲ | steveBK123 4 hours ago | parent | prev [-] | | I am less than half in jest here. I have friends who are finance industry CTOs, and they have described it to me in real time as CEO FOMO they need to manage.. Remember, tech is sort of an odd duck in how open people are about things and in the amount of cross-pollination. Many industries are far more secretive, so whatever people are hearing about competitors' AI usage is a 4th-hand-hearsay telephone game. edit: noteworthy that someone sent yet another firmwide email about AI today, which was just a link to some twitter thread by a VC AI booster thinkbro |
|
|
| |
| ▲ | tbrownaw 4 hours ago | parent | prev [-] | | Or it has an annoying learning curve. |
|
|
| ▲ | mdavid626 4 hours ago | parent | prev | next [-] |
| Reminds me of those little gadgets that move your mouse so that you show up as online on Slack. I'd just add a cron job to burn some tokens. |
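Tongue in cheek, the cron job above could be a single crontab entry. The schedule, the prompt, and the `claude -p` print-mode invocation are assumptions here; any CLI that bills tokens would serve:

```shell
# Hypothetical crontab entry: on weekdays, every 30 minutes during
# business hours, fire a throwaway prompt so the adoption dashboard
# registers token usage. Output is discarded.
# (Assumes an authenticated `claude` CLI is on the PATH.)
*/30 9-17 * * 1-5  claude -p "Reply with OK." > /dev/null 2>&1
```

Installed via `crontab -e`, the dashboard sees a steady burn while the developer does nothing at all.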
| |
| ▲ | munk-a 3 hours ago | parent [-] | | That sounds like a lot of work - maybe you could burn some tokens asking AI to write a cron to burn some tokens for you? |
|
|
| ▲ | monkaiju 4 hours ago | parent | prev | next [-] |
| That sounds awful... Thankfully our CTO is quite supportive of our team's anti-AI policy and is even supportive of posting our LLM ban on job postings. I honestly don't think that I could operate in an environment with any sort of AI mandate... |
| |
|
| ▲ | Rover222 4 hours ago | parent | prev [-] |
| I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting. If you think it's still just slop and errors when managed by experienced devs, you're already behind. |
| |
| ▲ | collingreen 4 hours ago | parent | next [-] | | The obvious pulling ahead from early AI adopters/forcers will happen any moment now... any moment | | |
| ▲ | Rover222 4 hours ago | parent [-] | | It's not obvious because the multiplier effect of AI is being used to reduce head count more than to drastically increase the net output of a team. Which, yeah, is scary. But my point is: if you don't see any multiplier effect from using the latest AI tools, you are either doing a bad job of using them (or don't have the budget, and I can't blame anyone for that), or are maybe in some obscure niche coding world? | |
| ▲ | emp17344 2 hours ago | parent [-] | | >the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team This simply isn’t how economics works. There is always additional demand, especially in the software space. Every other productivity-boosting technology has resulted in an increase in jobs, not a decrease. | | |
| ▲ | Rover222 13 minutes ago | parent [-] | | Well that's certainly and obviously how it's working at the moment in the software industry. We're in the transition between traditional coding jobs and agentic managers (or something like that) |
|
|
| |
| ▲ | coldpie 4 hours ago | parent | prev | next [-] | | I try these things a couple times a month. They're always underwhelming. Earlier this week I had the thing work tells me to use (Claude Code with Sonnet 4? something like that) generate some unit tests for a new function I wrote. I had a number of objections about the utility of the test cases it chose to write, but the largest problem was that it assigned the expected value to a test case struct field and then... didn't actually validate the retrieved value against it. If you didn't review the code, you wouldn't know that the test it wrote did literally nothing of value. Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write, done the job correctly, and cost ~$0.00 in compute, but I was curious to see if the AI could do it. Nope. Trillions of dollars for this? Sigh... try again next week, I guess. | |
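For scale, the sed & grep version of a rename like the one described above really is a one-liner. The field name `OldName`, the sample file, and the GNU-style `sed -i` flag below are illustrative, not from the comment:

```shell
# Set up a tiny illustrative repo with two occurrences of the field.
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
printf 'type Config struct { OldName int }\nc.OldName = 1\n' > a.go

# Find every file containing the identifier, then rewrite it in place.
# The \b word boundaries keep names like "OldNameSuffix" untouched.
grep -rl 'OldName' . | xargs sed -i 's/\bOldName\b/NewName/g'

# Confirm nothing was missed.
grep -rq 'OldName' . || echo "no occurrences remain"
```

Unlike the model's attempt, the result is trivially verifiable: the final `grep` either finds stragglers or it doesn't.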
| ▲ | floren 4 hours ago | parent | next [-] | | Twice now in this same story, different subthreads, I've seen AI dullards declaring that you, specifically, are holding it wrong. It's delightful, really. | | |
| ▲ | solidasparagus 3 hours ago | parent | next [-] | | I don't really care if other people want to be on or off the AI train (no hate to the gp poster), but if you are on the train and you read the above comment, it's hard not to think that this person might be holding it wrong. Using Sonnet 4, or even just not knowing which model they are using, is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is perhaps not uncommon, but it is, I think, a minority practice among people who are using the tools effectively. And a rename falls squarely in the realm of operations that, in my experience, will reliably work. This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong. Again, I don't mean any hate towards the original poster; everyone can have their own approach to AI. | |
| ▲ | coldpie 2 hours ago | parent [-] | | Yeah, I'm definitely guilty of not being motivated to use these tools. I find them annoying and boring. But my company's screaming that we should be using them, so I have been trying to find ways to integrate it into my work. As I mentioned, it's mostly not been going very well. I'm just using the tool the company put in front of me and told me to use, I don't know or really care what it is. |
| |
| ▲ | sigseg1v an hour ago | parent | prev [-] | | "Hey boss, I tried to replace my screwdriver with this thing you said I have to use? Milwaukee or something? When I used it, it rammed the screw in so tight that it cracked the wood." ^ If someone says that, they are definitely "holding it wrong", yes. If they used it more, they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built. | |
| ▲ | coldpie an hour ago | parent [-] | | No need to be mean. It's not living up to the marketing (no surprise), but I am trying to find a way to use these things that doesn't suck. Not there yet, but I'll keep trying. |
|
| |
| ▲ | Rover222 4 hours ago | parent | prev [-] | | Try Opus? | | |
| ▲ | coldpie 4 hours ago | parent [-] | | Eh, there's a new shiny thing every 2 months. I'm waiting for the tools to settle down rather than keep up with that treadmill. Or I'll just go find a new career that's more appealing. | | |
| ▲ | Rover222 4 hours ago | parent [-] | | It seems that the rate of change will only accelerate. | | |
| ▲ | coldpie 3 hours ago | parent [-] | | I dunno. At some point the people who make these tools will have to turn a profit, and I suspect we'll find out that 98% of the AI industry is swimming naked. | | |
| ▲ | Rover222 2 hours ago | parent [-] | | Yeah, I think it'll consolidate around one or two players. Most likely xAI, even though they're behind at the moment. No one can compete with the orbital infrastructure, if that works out. Big if. That's all a different topic. But I feel you; part of me wants to quit too, but I can't afford that yet. |
|
|
|
|
| |
| ▲ | driverdan 2 hours ago | parent | prev | next [-] | | Fall behind what? Writing code is only one part of building a successful product and business. Speed of writing code is often not what bottlenecks success. | | |
| ▲ | Rover222 12 minutes ago | parent [-] | | Yes, the execution part has become cheap, but planning and strategizing are not much easier. Devs and organizations that keep their heads in the sand will fall behind on one leg of that stool. |
| |
| ▲ | irishcoffee 4 hours ago | parent | prev | next [-] | | > I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting. I am aware of a large company, one everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team. Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home. If you think anything exciting is happening here, given the amount of money and bullshit enveloping this LLM disaster, you should put the keyboard down for a while. | |
| ▲ | g947o an hour ago | parent | prev [-] | | Anyone with more than 2 years of professional software engineering experience can tell this is complete nonsense. |
|