quotemstr 5 days ago

"Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

Given that AI is literally just words on a monitor just like the rest of the internet, I have a strong prior it's not "reprogram[ming]" anyone's mind, at least not in some manner that, e.g. heavy Reddit use might.

stego-tech 5 days ago | parent | next [-]

That’s a pretty spicy take for first thing in the morning. The confidence with which you assert an argument that has repeatedly been shown to be facile is…unenviable. “Fractal wrongness,” I’ve seen it called.

We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - showing that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It’s why line workers at McDonald’s don’t actually learn how to properly cook food (it’s all been proceduralized and automated where possible to eliminate the need for critical thinking, thus lowering the quality of labor needed to function), and it’s why - at present - we’re effectively training a cohort of humans who lack critical thinking and reasoning skills because “that’s what the AI is for”.

This is something I’ve known about since long before the current LLM craze, and it’s why I’ve always been wary of, or hostile to, “aggressively helpful” tools like some implementations of autocorrect, or some driving aids: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.

Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.

quotemstr 5 days ago | parent [-]

Learning and habit formation are not "reprogramming". If you define "reprogramming" as anything that updates neuron weights, the term encompasses all of life and becomes useless.

My point is that I don't see LLMs' effect on the brain as anything more than the normal experience of living, and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.

Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.

The process has been ongoing since Homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?

AnimalMuppet 5 days ago | parent [-]

The "so what" is that the skill in question is critical thinking. Letting that atrophy is a bigger deal than letting our paper-map-reading skills atrophy.

Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.

flanked-evergl 5 days ago | parent | prev | next [-]

VS Code Copilot has reprogrammed my mind to the point where not using it is just not worth it. It seldom helps me do difficult things; it often helps me do incredibly mundane things, and if I had to go back to doing those incredibly mundane things by hand I would rather become a gardener.

AnimalMuppet 5 days ago | parent | prev | next [-]

> "Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.

And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.

quotemstr 5 days ago | parent [-]

> And, why the scare quotes?

N=54, students and academics only (mostly undergrad), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing that people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.

You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.

ath3nd 5 days ago | parent | prev [-]

If it weren't for studies like this, you'd still think arsenic was a great way to produce a vibrant green pigment to paint your house the color of nature!

Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.

And the study actually found that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; those two things are not exclusive. If anything, their effects compound on an already pretty dumb and gullible population. So if your argument is 'but what about Reddit', that's a non-argument called 'whataboutism'. Look it up and hopefully it might give you a hint as to why you are getting downvoted.

There have been three recent studies showing that:

- 1. 95% LLM projects fail in the enterprise https://fortune.com/2025/08/18/mit-report-95-percent-generat...

- 2. Experienced developers get 19% less productive when using an LLM https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

- 3. LLM usage makes you dumber https://publichealthpolicyjournal.com/mit-study-finds-artifi...

We reached a stage where people on the internet mistake their opinion on a subject to be as relevant as a study on the subject.

If you don't have another study and haven't done the science to disprove this one, how can you so easily dismiss a study that took time, data, and the scientific method to reach a conclusion? I feel we've got to actively and firmly call out that kind of behavior and ridicule it.

planetmcd 5 days ago | parent [-]

- 1. Wait, in a category where the general failure rate is traditionally 75%, using a bleeding-edge technology adds 20% more risk. What a shock.

- 2. This is an interesting study, but perhaps limited. It draws conclusions from a set of 16 developers working on very large projects, many of whom had no previous experience with the editor used in the study or with LLMs in general. The study did conclude the tools added time in these cases. The gap between the measured slowdown and the developers' strong sense of value would be the thing of note to uncover from these results: the study notes that 79% continued to use the AI tools. Speed is not the only value to be gained, but it was the only value measured. (The study notes this too.)

- 3. The author either didn't read the poorly-thought-out study it's based on, or used AI to summarize it poorly. Also, it seems you didn't read the study either.