tomwojcik 7 hours ago

Hi HN, I've been using Claude Code heavily for the last year. Recently I've noticed a shift in sentiment among peers, here on HN, and over on /r/ExperiencedDevs. I wrote down some thoughts on the hidden costs of using AI too much that are not obvious, yet there's no concrete data yet. I tried to pull together data from a few different places to articulate something I think a lot of us are experiencing right now. I'd love to hear your thoughts

mold_aid 6 hours ago | parent | next [-]

Fun read. The audience can see Zeno's Arrow emerging on your "we're on the way to AGI!" timeline, a nice visual representation of "the trajectory is real, but the timeline keeps slipping."

But I'm gonna say I've always seen "[retail software] is just a tool" as an odd statement. I've heard it a lot over the last 20 years. "Just" a tool. Why always phrased like that? How can we be overthinking the role of a tool while you're in the middle of a multi-page essay about how it causes cognitive decline?

Nobody frets about the effect of a screwdriver on some IT rando's ability to do other computer stuff if on occasion they're screwing something into a rack. Seems odd to be so consistent about privileging the concept of a "tool" when you're saying that tool is on its way to thought.

>I’m addicted to prompting, I get high from it

yikes, but I did appreciate this honesty. Though, again: "this hash pipe is just a tool" did not appear after this statement

Also - isn't addiction behavioral, as opposed to strictly neurological? Maybe you should do a follow-up on the behavioral effects of a situation like "There’s no spark in you anymore." If you found a new identity that wasn't "I'm a prompt addict," what would it be?

pindab0ter 2 hours ago | parent | prev | next [-]

This post resonates with me. I recognize everything you said except for the metrics part, since my employer luckily doesn't do that.

It's addictive. You're fast, efficient, you feel like you're in control. All while you're slowly losing grip.

I love how nuanced your takes are. The biggest challenge of this new programming paradigm is not figuring out how to use it to its fullest extent. It is finding out what a sustainable pace is, both short and long term.

It is hard to overstate how difficult that is.

gengstrand 3 hours ago | parent | prev | next [-]

Thanks for writing this blog. Others are also starting to notice these impacts and writing about them. I believe that it is important for more voices to be heard.

What resonated most for me was the "Finding Your Threshold" section. Your line "Developers need the dopamine hit of creation" is memorable. I have also blogged about this phenomenon at https://www.exploravention.com/blogs/soft_arch_agentic_ai/ but I frame it more as how leaders can help the organization arrive at a healthy and sustainable balance between writing and reviewing code.

skyberrys 2 hours ago | parent [-]

Hey, your write-up is good too. I thought the specific application to software architecture, as distinct from software engineering, is a valuable contribution, because architecture is where an AI tool's mistakes have the longest-term repercussions.

radleta 4 hours ago | parent | prev | next [-]

Thanks, I like it! Thanks for taking the time to put it together.

throwaway346434 4 hours ago | parent | prev | next [-]

Basically had the same urge to write about this problem, prompted by the exact same comments around mental fatigue this week. Only got to the research stage.

Here's some of the literature I dug up when looking at what is the potential risk to cognition when you don't enjoy what you are doing.

Working memory is "gated"; you selectively process information relevant to a goal - or why you need to turn the radio off to reverse a car. (Numerous papers take it as a given, can't find a specific one developing the exact model of gating)

On working memory and trainability: https://www.nature.com/articles/nrn.2016.43 Working memory is (potentially) dopamine responsive, and expanded by use/training.

On building mental models, writing something down activates more of your brain than typing (cognitive offloading): https://www.scientificamerican.com/article/why-writing-by-ha...

I would argue that typing is better than just reading, and programming adds some extra elements on top: you cut and paste to rearrange, run tests, iterate, and spatially navigate to where various areas of your code are. So it is likely closer to the findings around handwriting than the study suggests, but I don't have specific studies on that.

On reward ($) as a proxy for enjoyment/flow state, and on motivation; these two papers used similar basic experimental designs: https://www.nature.com/articles/s41598-025-09949-1

"Participants performed a delayed-estimation orientation working memory (WM) task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes."

https://www.jneurosci.org/content/39/43/8549 > "During the task, the prospect of reward varied from trial to trial. Participants made faster, more accurate judgements on high-reward trials. Critically, high reward boosted neural coding of the active task rule, and the extent of this increase was associated with improvements in task performance"

You can also infer from their experiments that low reward = less care exercised.

I feel like a lot of these papers aren't really surprising, but they do measure something that many people have probably felt is true but can't prove.

While these papers don't talk about AI or skill decline specifically, it's reasonable to say you don't get many of the benefits when the work is low-reward, passive task execution, where you are leaving review comments that are just reprompting a machine. You know it's not a person, so engaging feels even lower value than a standard code review might.

I think overall, the rule of thumb around when to use AI should be closely linked to how painful / low reward a task is likely to be. Debugging something with a 10 minute build/test loop and a mystery problem that is not easy to control? AI party. Writing a complex but fun set of business rules? Run it on your wetware while it is still giving you a sugar hit. An "easy" bug you have stuffed up fixing three times in a row? Push through a bit of discomfort and frustration, but fall back to tooling once you have invested reasonable effort and are starting to feel slightly fatigued.

MattRix 5 hours ago | parent | prev [-]

I’d encourage you to read this post: https://factory.strongdm.ai

It hit the front page here a few weeks ago, but I don't think most people took it seriously; they got hung up on the $1000/day in tokens part.

I am convinced that approach is the future of nearly all software development. It’s basically about how if you’re willing to spend enough tokens, these current models can already complete any software task. With the right framework in place, you don’t need to think about the code at all, only the results.

I really don’t like that the industry is heading this way, but the more I consider that approach, the more I’m convinced it is inevitable.