welshwelsh 6 hours ago:
| I think it's 5. I was very impressed when I first started using AI tools. Felt like I could get so much more done. A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I wrote the code myself. |
enobrev 4 hours ago:
This is what slows me down most. The initial implementation of a well-defined task is almost always quite fast. But then it's a balance of either:

* Checking it closely myself, which sometimes takes just as long as it would have taken me to implement it in the first place, with just about as much cognitive load, since I now have to understand something I didn't write
* OR automating the checking by pouring on more AI, which takes just as long or longer than it would have taken me to check it closely myself. Especially in cases where suddenly 1/3 of the automated tests are failing and it either needs to find the underlying system it broke or iterate through all the tests and fix them.

Doing this iteratively has made the overall process for an app I'm trying to implement 100% using LLMs take at least 3x longer than it would have taken me to build it myself. That said, it's unclear I would have kept building this app without using these tools. The process has kept me in the game, so there's definitely some value there that offsets the longer implementation time.
|
|
ACCount37 6 hours ago:
| "People use AI to do the same tasks with less effort" maps onto what we've seen with other types of workplace automation - like Excel formulas or VBA scripts. Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort? |
|
DenisM 4 hours ago:
6. It’s now easier to get something off the ground, but structural debt accumulates invisibly. The inevitable cleanup operation happens outside of the initially assessed productivity window. If you expand the window across time and team boundaries, the measured productivity reverts to the mean. This option is insidious in that not only are the people initially asked about the effect oblivious to it, it is also very beneficial for them to deny the outcome altogether. Individual integrity may or may not overcome this.
|
thinkmassive 3 hours ago:
What's the difference between 1 & 5? I've personally witnessed every one of these, but those two seem like different ways to say the same thing. I would fully agree if one of them specified a negative impact on productivity, and the other was net neutral but artificially felt like a gain.
|
rsynnott 5 hours ago:
(1) seems very plausible, if only because that is what happens with ~everything which promises to improve productivity. People are really bad at self-evaluating how productive they are, and productivity is really pretty hard to measure externally.
|
mlinhares 6 hours ago:
Why not all? I've seen them all play out. There are also the people downstream of AI slop who feel less productive because now they have to clean up the shit other people produced.
Pannoniae 6 hours ago:
You're right, it kinda depends on the situation itself! And the downstream effects. Although, I'd argue that the one you're talking about isn't really caused by AI itself; that's squarely an "I can't say no to the slop because they'll take my head off" problem. In healthy places, you would just say "hell no, I'm not merging slop", just as you have previously said "no, I'm not merging shit copypasted from Stack Overflow".
|
|
pydry 6 hours ago:
> 1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

It is, from what I've seen. It has the same visible effect on devs as a slot machine giving out coins when it spits out something correct. Their faces light up with delight when it finally nails something.

This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.
|
fritzo 4 hours ago:
| 2,3,4. While my agent refactors code, I do housework: fold laundry, wash dishes, stack firewood, prep food, paint the deck. I love this new life of offering occasional advice, then walking around and using my hands. |
|
HardCodedBias 5 hours ago:
(3) and (4) are likely true. In theory, competition is supposed to address this. However, our evaluation processes generally occur on human and predictable timelines, which is quite slow compared to this impulse function. There was a theory that inter-firm competition could speed this clock up, but that doesn't seem plausible currently.

Almost certainly AI will be used, extensively, for reviews going forward. Perhaps that will accelerate the clock rate.