sdoering 4 hours ago

This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.

Where I'm skeptical of this study:

- 54 participants, only 18 in the critical 4th session

- 4 months is barely enough time to adapt to a fundamentally new tool

- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?

- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch

Where the study might have a point:

Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.

[Edit]: Formatting

wisty 2 hours ago | parent | next [-]

Soapbox time.

They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.

We're becoming increasingly like the WALL-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.

And it's not even that machines are always better; they only have to be barely competent. People will risk their lives in a horribly janky self-driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.

We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox

Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).

CuriouslyC an hour ago | parent | next [-]

Brains are adaptive. We're not getting dumber; we're just adapting to a new environment. Just because our brains are less fit for other environments doesn't make them worse.

As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.

discreteevent an hour ago | parent | next [-]

> Brains are adaptive.

They are, but you go on to assume that they will adapt in a good way.

Bodies are adaptive too. That didn't work out well for a lot of people when their environment changed to be sedentary.

doublerabbit an hour ago | parent | prev [-]

Brains are adaptive, and as we adapt we are becoming more cognitively unbalanced. We're absorbing potentially biased information at a faster rate. GPT can give you information on X in seconds. Have you thought about it? Is that information correct? Information can easily be dressed up to sound real while masking the real as false.

Launching a search engine and searching may spew incorrectness, but it made you exercise judgement - think. You could see two different opinions, one underneath the other; you saw both sides of the coin.

We are no longer thinking critically. We are taking information at face value, marking it as correct, and not questioning it afterwards.

The ability to evaluate critically and rationally is what's decaying. Who opens a physical encyclopedia nowadays? That itself requires resources, effort and time. Life's level of complexity doesn't help, making it easier to just accept that the information given to us is true. The WALL-E view isn't wrong.

CuriouslyC 28 minutes ago | parent [-]

I see a lot of people grinding and hustling in a way that would have crushed people 75 years ago. I don't think our lack of desire to crack an encyclopedia for a fact, rather than rely on AI to serve up a probably-right answer, is down to laziness; we just have bigger fish to fry.

doublerabbit 24 minutes ago | parent [-]

Valid point, amended my viewpoint to cater to that, thanks.

UltraSane an hour ago | parent | prev [-]

Instead of memorizing vast amounts of text, modern people memorize the plots of vast amounts of books, movies, TV shows, video games and pop culture.

Computers are much better at remembering text.

mschild 4 hours ago | parent | prev | next [-]

Needs more research. Fully agree on that.

That said:

TV very much is the idiot box. Not necessarily because of the TV itself but rather because of what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low-quality trash and constant news bombardment.

Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.

richrichardsson 3 hours ago | parent [-]

> Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to

I got scared by how badly my junior (middle? ages 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.

I literally couldn't remember how to carry the 1 when doing subtractions of 3-digit numbers! Felt idiotic having to ask an LLM for help. :(

wiz21c 3 hours ago | parent [-]

For my part, I don't use that carry method at all. When I have to subtract, I subtract in chunks that my brain can easily handle. For example, 1233 - 718: I'll do 1233 - 700 = 533, then 533 - 20 = 513, then 513 + 2 = 515. It's completely instinctive (and thus I can't explain it to my children :-) )

What I have asked my children to do very often is back-of-the-envelope multiplications and other computations. That really helped them to get a sense of the magnitude of things.
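
For the curious, here's a rough Python sketch of that chunking idea - my own generalization of the 1233 - 718 example, not a claim about the exact mental steps, and the helper name is just for illustration:

    import math

    def chunked_subtract(a, b):
        # Illustrative only: subtract b in "friendly" chunks, the way the
        # example above does it (718 -> subtract 700, subtract 20, add 2 back).
        steps = []
        hundreds = (b // 100) * 100          # 718 -> 700
        a -= hundreds
        steps.append(f"- {hundreds} = {a}")  # 1233 - 700 = 533
        rest = b - hundreds                  # 18
        rounded = math.ceil(rest / 10) * 10  # round up to 20
        a -= rounded
        steps.append(f"- {rounded} = {a}")   # 533 - 20 = 513
        overshoot = rounded - rest           # 2
        a += overshoot
        steps.append(f"+ {overshoot} = {a}") # 513 + 2 = 515
        return a, steps

    print(chunked_subtract(1233, 718))  # (515, ['- 700 = 533', '- 20 = 513', '+ 2 = 515'])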

zeroonetwothree a few seconds ago | parent | next [-]

This doesn't scale to larger numbers though. I do that too for smaller subtractions, but if I need to calculate some 9-digit computation then I would use the standard pen-and-paper tabular method with borrowing (not that it comes up in practice).
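
Since the "carry the 1" question came up above, here's a small Python sketch of that column method with borrowing - the helper name and the choice to work least-significant digit first are just for illustration, not any particular textbook's presentation:

    def column_subtract(a, b):
        # Digit-by-digit subtraction with borrowing (assumes a >= b >= 0).
        a_digits = [int(d) for d in str(a)][::-1]  # least-significant first
        b_digits = [int(d) for d in str(b)][::-1]
        b_digits += [0] * (len(a_digits) - len(b_digits))
        result, borrow = [], 0
        for top, bottom in zip(a_digits, b_digits):
            top -= borrow
            if top < bottom:
                top += 10      # borrow from the next column
                borrow = 1
            else:
                borrow = 0
            result.append(top - bottom)
        return int("".join(str(d) for d in reversed(result)))

    print(column_subtract(1233, 718))  # 515
    print(column_subtract(523, 187))   # 336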

n4r9 an hour ago | parent | prev [-]

I have a two-year-old and often worry that I'll teach him some intuitive arithmetic technique, then school will later force a different method and mark him down despite getting the right answer. What if it ends up making him hate school, maths, or both?

__s 12 minutes ago | parent [-]

I experienced this. It only made me hate school, but maybe that's because I had game programming at home to appreciate math with.

Just expose them to everyday math so they aren't one of those people who think math has no practical uses. My father isn't great with math, but he would raise questions like how wide a river was (solvable from one side with trig, using 30-degree angles for easy math). Napkin math makes things much more fun than strict classroom math with one right answer.
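
For anyone who wants the napkin version of that river question, here's a small Python sketch of one way to set it up - a guess at the trig being alluded to, not necessarily the father's exact method, and the helper name and sample numbers are just for illustration: stand opposite a landmark on the far bank, walk a measured distance along your own bank, then note the angle between the bank and the sight line to the landmark.

    import math

    def river_width(walked_distance_m, angle_deg):
        # width = d * tan(angle); 30/60-degree angles keep tan at
        # 1/sqrt(3) or sqrt(3), which is easy to estimate in your head.
        return walked_distance_m * math.tan(math.radians(angle_deg))

    print(round(river_width(100, 60)))  # walk 100 m, sight at 60 degrees: ~173 m
    print(round(river_width(100, 30)))  # sight at 30 degrees: ~58 m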

kace91 3 hours ago | parent | prev | next [-]

I think novels and TV are bad examples, as they are not substituting for a process. The writing one is better.

Here’s the key difference for me: AI does not currently replace full expertise. In contrast, there is not a “higher level of storage” that books can’t handle and only a human memory can.

I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised, lower-risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.

duskdozer 4 hours ago | parent | prev | next [-]

Not sure "they've always been wrong before" applies to TV being the idiot box and everything after.

boesboes 3 hours ago | parent | prev | next [-]

I think that is a VERY false comparison. As you say, LLMs try to take over entire cognitive and creative processes, and that is a bigger problem than outsourcing arithmetic.

cimi_ 4 hours ago | parent | prev | next [-]

> The historical pattern suggests cognitive abilities shift rather than disappear.

Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

darkwater 2 hours ago | parent [-]

What the hell have I just read (or at least skimmed)?? I can't tell whether the author is:

a) serious, but we live on different planets

b) serious with the idea, tongue-in-cheek in the style and using a lot of self-irony

c) an ironic piece with some real idea

d) mocking AI maximalists

cimi_ an hour ago | parent | next [-]

There was discussion about this here a couple of weeks ago: https://news.ycombinator.com/item?id=46458936

Steve Yegge is a famous developer; this is not a joke :) You could say he is an AI maximalist. From your options I'd go with (b): serious with the idea, tongue-in-cheek in the style, and using a lot of self-irony.

It is exaggerated, but this is how he sees things ending up eventually. This is real software.

If things do end up in glorified kanban boards, what does that mean for us? That we can work less and use the spare time to read and do yoga, or that we'll work the same hours with our attention even more fragmented and with no control over the outputs of these things (=> stress)?

I really wish that the people who think this is good for us and are pushing for this future would do a bit better than:

1. More AI 2. ??? 3. Profit

cap11235 2 hours ago | parent | prev [-]

Just ignore the rambling crypto shill.

ben_w 4 hours ago | parent | prev | next [-]

> 4 months is barely enough time to adapt to a fundamentally new tool

Yes, but there's also the extra wrinkle that this whole thing is moving so fast that anything 4 months old is borderline obsolete. Same into the future: any study starting now, based on the state of the art on 22/01/2026, will involve models and potentially workflows that are already obsolete by 22/05/2026.

We probably can't ever adapt fully when the entire landscape is changing like that.

> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.

The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.

AI is a complement to our own minds on both sides of this: unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models; but we don't, and the AI runs on hardware which allows it to.

For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.

wartywhoa23 3 hours ago | parent | prev | next [-]

> TV was the "idiot box."

TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.

chairmansteve 3 hours ago | parent | prev | next [-]

"Socrates worried writing would destroy memory".

He may have been right... Maybe our minds work in a different way now.

vladms 4 hours ago | parent | prev | next [-]

> That said, I'm not sure "they've always been wrong before" proves they're wrong now.

I think a better framing would be "abusing any new tool/medium (using it too much or for everything) can lead to negative effects". It is hard to clearly define what counts as abuse, so further research is required, but I think it is a healthy approach to accept that there are downsides in certain cases (that probably applies to everything).

lr4444lr 2 hours ago | parent | prev | next [-]

Were any of the prior fears totally wrong?

BlackFly 3 hours ago | parent | prev | next [-]

If you recognize that what we remember are the extremized strawman versions of the complaints, then you can see that they were not wrong.

Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much better (up to conquerors burning down libraries, whereas before it would have taken genocide), but to hear a person stand up and quote Desiderata from memory is a touching experience of the human condition.

Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can witness a similar experience: reading for typos solidifies the story in your mind in a way that casual reading doesn't. In losing scribes we lost the prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...

Pulp fiction. I think very few people (but I would be disappointed if it were no one) would argue that Dan Brown's The Da Vinci Code is on the same level as War and Peace. From here magazines were created, on even cheaper paper - rags, some would call them (or use that word for tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment; text lost its solemnity. The importance of the written word diminished on average as the words being printed became more banal.

TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:

Technology is a double-edged sword: we may gain something, but we also can and did lose some things. Whether it was progress or not is generally a normative question that a majority often agrees with in one sense or another, but there are generational differences in those norms.

In the same way that overuse of a calculator leads to atrophy of arithmetic skills and overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool that writes essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is that its conclusion seems so obvious that it may be too easy for some to believe, which can hide poor statistical power or p-hacking.

darkwater 2 hours ago | parent [-]

I think your take is almost irrefutable, unless you frame human history as the only possible way to have achieved humanity's current status and (unevenly distributed) quality of life.

I also find the Socrates reference that's ALWAYS brought up in these discussions exhausting. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as no longer thinking because an AI is doing the thinking for you.

We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and having it materialized in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?

direwolf20 3 hours ago | parent | prev | next [-]

How do we know they were wrong before?

piyuv 3 hours ago | parent | prev | next [-]

None of the examples you provided were being sold as “intelligence”

bowsamic 4 hours ago | parent | prev | next [-]

> they've always been wrong before

Were they? It seems that often the fears came true, even Socrates'.

TheOtherHobbes 3 hours ago | parent [-]

Writing didn't destroy memory; it externalised it and made it stable and shareable. That was absolutely transformative, and far more useful than being able to re-improvise a once-upon-a-time heroic poem from memory.

It hugely enhanced synthetic and contextual memory, which was a major development.

AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."

Akronymus an hour ago | parent | next [-]

> AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

Wouldn't the endgame of externalized cognition be that humans essentially become cogs in the machine?

latexr an hour ago | parent | prev [-]

> in the same way Socrates couldn't imagine Hacker News.

Perhaps he could. If there’s an argument to be made against writing, social media (including HN) is a valid one.

raincole 2 hours ago | parent | prev [-]

> "they've always been wrong before"

In my opinion, they've almost always been right.

In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.

In the programming world, you have seen people who said "how hard could it be? It's just adding a new button/changing the font/whatever..."

And strangely, in the end those tech muggles were the insightful ones.