LocalPCGuy 5 days ago

This is a bad and sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to draw the exact conclusions this article says the paper draws.

The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

> Additional vocabulary to avoid using when talking about the paper

> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".

1. https://www.brainonllm.com/

2. https://www.brainonllm.com/faq

causal 5 days ago | parent | next [-]

Yeah I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :(

This study in particular has made the rounds several times, as you said. The study measures the impact on 18 people of using ChatGPT just four times over four months. I'm sorry, but there is no way that is controlling for noise.

I'm sympathetic to the idea that overusing AI causes atrophy, but this is just clickbait for a topic we love to hate.

Mentlo 4 days ago | parent | next [-]

Ironically, you’re now replicating the Reddit-ified response to this paper by attacking the sample size.

The sample size is fine. It’s small, yes, but normal for psychological research which is hard to do at scale.

And the difference between groups is so large that the noise would have to be at unheard-of levels to taint the finding.

LocalPCGuy 5 days ago | parent | prev | next [-]

Yup, I even found myself a bit hopeful that maybe it was a follow-up or new study and we'd get more, or at least different, information. But that bit of hope is also an example of my own bias/sympathy toward the idea that it might be harmful.

It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.

tarsinge 5 days ago | parent | prev | next [-]

Ironically, there should be another study on Reddit of how not using AI is also leading to cognitive decline. On programming subreddits, people have lost all sense of engineering and have simply become religious about being against a tool.

GeoAtreides 5 days ago | parent | prev [-]

>I feel like HN is being Reddit-ified

It's September and September never ends

NapGod 5 days ago | parent | prev | next [-]

Yeah, it's clear no one is actually reading the paper. The study showed that the group who used LLMs for the first three sessions, then had to do session 4 without them, had lower brain connectivity than was recorded for session 3, with all the groups showing some kind of increase from one session to the next. Importantly, this group's brain connectivity didn't reset to session 1 levels, but landed somewhere in between. They were still learning and getting better at the essay-writing task. In session 4 they effectively had part of the brain network they were using for the task taken away, so obviously there's a dip in performance. None of this says anyone got dumber. The philosophical concept of the Extended Mind is key here.

Imo the most interesting result is that the brains of the group that had done sessions 1-3 without search engine or LLM aids lit up like Christmas trees in session 4 when they were given LLMs to use, and that's what the paper's conclusions really focus on.

marcofloriano 5 days ago | parent | prev [-]

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

Maybe it's not safe to say so far, but it has been my experience using ChatGPT for eight months to code. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me.

And I don't think we will see new studies on this subject, because those leading society as a whole don't want negative press toward AI.

LocalPCGuy 5 days ago | parent [-]

You are referencing your own personal experience, and while that is an entirely valid opinion for you to hold about your own usage, it's not possible to extrapolate it across an entire population. Whether or not you're doing that, part of the point I was making was how people who "think it makes sense" will often not critically analyze something because it already agrees with their preconceived notions. It's super common; I'm just calling it out because we can all do better.

All we can say right now is "we don't really know how it affects our brains", and we won't until we get more studies (which is what the underlying paper was calling for: more research).

Personally, I do think we'll get more studies, but the quality is the question for me. It's really hard to do a study right when, by the time it's done, two new generations of LLMs have been released, making the study data potentially obsolete. So researchers are going to be tempted to go faster, use fewer people, and be less rigorous overall, which in turn may make for bad results.