LocalPCGuy | 5 days ago
This is a bad and sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to draw the exact conclusions this article says it draws. The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing much due diligence on the actual source material.

From the authors' FAQ[2]:

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

> Additional vocabulary to avoid using when talking about the paper

> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".
causal | 5 days ago
Yeah, I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :( This study in particular has made the rounds several times, as you said.

The study measures the impact on 18 people who used ChatGPT just four times over four months. I'm sorry, but there is no way that controls for noise. I'm sympathetic to the idea that overusing AI causes atrophy, but this is just clickbait for a topic we love to hate.
| ||||||||||||||||||||||||||
NapGod | 5 days ago
Yeah, it's clear no one is actually reading the paper. The study showed that the group who used LLMs for the first three sessions, then had to do session 4 without them, had lower brain connectivity than was recorded for session 3, with all the groups showing some kind of increase from one session to the next. Importantly, this group's brain connectivity didn't reset to the session 1 levels, but landed somewhere in between. They were still learning and getting better at the essay-writing task. In session 4 they effectively had part of the brain network they were using for the task taken away, so obviously there's a dip in performance. None of this says anyone got dumber. The philosophical concept of the Extended Mind is key here.

IMO the most interesting result is that the brains of the group that had done sessions 1-3 without the search engine or LLM aids lit up like Christmas trees in session 4 when they were given LLMs to use, and that's what the paper's conclusions really focus on.
marcofloriano | 5 days ago
> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

Maybe it's not safe to say so far, but it matches my experience after eight months of using ChatGPT to code. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me. And I don't think we will see new studies on this subject, because those leading society as a whole don't want negative press toward AI.