marcofloriano | 5 days ago
> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use words like "stupid", "dumb", "brain rot", "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.

Maybe it's not safe to say so far, but it has been my experience over eight months of using ChatGPT to code. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me. And I don't think we will see new studies on this subject, because those leading society as a whole don't want negative press about AI.
LocalPCGuy | 5 days ago | parent
You are referencing your own personal experience, and while that is an entirely valid opinion for you to hold about your own usage, it's not possible to extrapolate it across an entire population. Whether or not you're doing that, part of the point I was making was that people who "think it makes sense" often won't critically analyze something that already agrees with their preconceived notions. That's super common; I'm just calling it out because we can all do better.

All we can say right now is "we don't really know how it affects our brains," and we won't until we get more studies (which is what the underlying paper itself was calling for: more research).

Personally, I do think we'll get more studies, but the quality is the question for me. It's really hard to do a study right when, by the time it's done, two new generations of LLMs have been released, making the study data potentially obsolete. So researchers are going to be tempted to go faster, use fewer participants, and be less rigorous overall, which in turn may produce bad results.