| ▲ | dkarl 4 hours ago |
| I kind of get what they're thinking in trying to make sure all engineers use AI. For myself, and for the engineers working with me, I saw everyone go through an initial aversion and resistance to AI, followed by an immediate productivity boost once they started using it. So there's definitely a good reason to get everybody to start using AI. You don't want a good engineer resisting AI indefinitely if you know it will make them more productive. Incentivizing people who are already using AI to burn as many tokens as possible does seem a little crazy, though. |
|
| ▲ | swatcoder 3 hours ago | parent | next [-] |
| It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them. The fundamental issue is that there really aren't many convincing demonstrations that holdouts can relate to, and there remains basically no evidence of real value gained. Users attest to higher productivity and point to material but intermediate factors like token use, generated lines of code, PR counts, etc., but there doesn't seem to be a convincing revolution in the quantity or quality of mature software being delivered. Combine those puzzling impressions of outcomes with a sense, for many, that they don't have a personal problem that warrants a new tool, and you end up with a pretty earnest and defensible indifference. To get holdout engineers using AI, the industry needs to focus on demonstrating relatable workflow improvements and practical improvements to finished work product. Instead, policies like token-use incentives just rely on luring them into pulling the slot machine handle, with the expectation that once they do, they'll join the cadre of other converts who justify their transition with subjective improvements and intermediate metrics. |
| |
| ▲ | dkarl 2 hours ago | parent | next [-] | | Unfortunately, a demonstration convincing enough to win over a skeptical colleague would require measuring developer productivity. Among skeptics, I've only seen people won over by using it themselves, because when they use AI for their own work, they invest the time to review the code, understand it, and assess its quality by their own standards. That's how people learn to trust AI coding assistance. | | |
| ▲ | recursive 2 hours ago | parent [-] | | Perhaps amusingly, I think I actually trusted it more before I started using it. Specifically because of my assessment of its quality, including things like factual correctness. |
| |
| ▲ | crabbone 3 hours ago | parent | prev | next [-] | | Here's one selling point from an experience I'm having right now: others will use AI, and it will make your life miserable. You need to know enough about AI to be able to fight back. The experience: one employee, self-selected, assigned themselves the task of configuring integration with a MySQL HA deployment. They produced a mountain of code in barely a month (we're talking close to a hundred thousand lines of Python). And they decided to go with Oracle's tools instead of Galera... Everything this employee produces is, quite obviously, AI-generated. In the initial stages, they also worked on their project completely alone: no reviews. To give some sense of the size of this insanity: one of the configuration scripts I'm working with now is 9K+ lines of Python that's supposed to run from `mysqlsh`. About half of it is module-level variables. It will take many months to restructure this "prototype" by hand. It's a pain to read and to navigate. The GitLab UI has perceptible lag just trying to display the script, forget about diffs. I will absolutely need AI to try to make sense of it (I'm not allowed to fix it). And if it ever comes to fixing it, I can't imagine that being done without automation of some sort. Unfortunately, AI generates problems that, sometimes, only AI can fix. :( | |
| ▲ | ijidak 3 hours ago | parent | prev [-] | | > It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them I have. My conclusion is... humans are deeply irrational when it comes to rapid change. Egg or olive oil prices spike, and humans oust an entire government. The rate of immigration spikes, and humans throw immigrants into camps and break useful treaties. Most of the resistance I've observed amongst engineers is resistance to change generally. And then digging in when challenged. | | |
| ▲ | dingaling 2 hours ago | parent | next [-] | | > resistance to change generally Nah, software engineers were always butterflies fluttering from one language or framework to the Next Hot Thing. Change was part of the job, if you didn't keep up you fell behind and atrophied. Resistance to AI is, I think, more because it is seen as an existential threat, or because it's something whose ultimate long-term outcome is still undefined. It's going to be either a benefit or a hazard, and we don't yet know whether we'll need Bladerunners to rein it in. | |
| ▲ | autoexec 2 hours ago | parent | prev [-] | | > Most of the resistance I've observed amongst engineers is resistance to change generally. Most engineers I've known are enthusiastic when given the opportunity to play around with a new toy. What they don't like is anything being forced on them.
There's nothing irrational about that. They've often invested a lot of time into optimizing their workflows. I've also found that if something actually makes their work easier, you will never have to twist their arm to make them use it. They'll apply it everywhere it helps. They'll even try using it in places and in ways it was never intended for. If they're digging in, you likely haven't made a very compelling case for your changes. |
|
|
|
| ▲ | com2kid 3 hours ago | parent | prev | next [-] |
| There is a limit somewhere, but I keep finding more and more ways to use AI. Not just coding, but things like "here is my team's mandate; go through all my company's Slack channels, Linear tasks, Notion pages, and recent merges in git, and summarize any work other teams are doing that intersects with my team's work." That'll burn a lot of tokens. Set that up to run once or twice a week and produce a report. |
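| A periodic report like the one described could be wired up as a scheduled job. A minimal sketch in Python, where the `fetch_*` functions and the `summarize` callable are hypothetical stand-ins for the real Slack/Linear/Notion/git integrations and whatever model API is actually used:

```python
# Hypothetical sketch of a weekly cross-team activity report.
# The fetch_* functions and summarize() are stubs standing in for
# real Slack/Linear/Notion/git integrations and a model API call.

TEAM_MANDATE = "Own the billing pipeline and its integrations."  # example mandate

def fetch_slack_messages():
    # Stand-in for a Slack API crawl of relevant channels.
    return ["#platform: migrating invoice events to the new queue"]

def fetch_linear_tasks():
    # Stand-in for a Linear API query of recent tasks.
    return ["PLAT-42: deprecate legacy billing webhook"]

def build_prompt(mandate, sources):
    # Collate everything into one prompt asking the model to flag
    # work that intersects with the team's mandate.
    lines = [
        f"My team's mandate: {mandate}",
        "Summarize any work below that intersects with it:",
    ]
    for name, items in sources.items():
        lines.append(f"\n## {name}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

def weekly_report(summarize):
    # summarize is whatever function sends the prompt to a model
    # and returns its completion.
    sources = {
        "Slack": fetch_slack_messages(),
        "Linear": fetch_linear_tasks(),
    }
    return summarize(build_prompt(TEAM_MANDATE, sources))
```

Scheduling is the easy part: a cron entry or CI pipeline invoking this once or twice a week and posting the result to a channel. |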
| |
| ▲ | thinkharderdev 3 hours ago | parent [-] | | Sure, finding ways to burn tokens is not hard. Even finding ways to burn tokens on things (like your example) that are actually useful is not hard. But what is the ROI on that from the company's perspective? You could also have hired an intern to collate this report every week. But if you went to your boss and asked to hire someone to do that, they would, reasonably, ask what the value of that thing is and whether it justifies more headcount. Yet we're in this bizarro world where the bosses are basically saying "go hire more people, even if you don't have specific high-value things for them to do. Just create make-work jobs for them!" It's wild. |
|
|
| ▲ | recursive 2 hours ago | parent | prev [-] |
| I've been using it for many months. I still haven't gotten any kind of boost. If I'm going to get ranked on token use though, best believe I'll be using the optimal quantity of tokens. |