Are ChatGPT and co harming human intelligence? (theguardian.com)
67 points by topaz0 a day ago | 85 comments
51Cards 20 hours ago | parent | next [-]

I'm going to re-post something I commented in another thread a while ago:

I tend to think it will. Tools replaced our ancestors' ability to make things by hand. Transportation and elevators reduced the average person's fitness for walking long distances or climbing stairs. Pocket calculators made the general population less able to do complex math. Spelling and grammar checkers have reduced knowing how to spell or form complete, proper sentences. Keyboards and email are making handwriting a dying skill. Video is reducing our need and desire to read or absorb long-form content.

Most humans will take the easiest path provided. And while we consider most of the above simple improvements to daily life, efficiencies, they have also fundamentally changed, on average, what we are capable of and what skills we learn (especially during formative years). If I dropped most of us here into a pre-technology wilderness we'd be dead in short order.

However, most of the above, it can be argued, are just tools that don't impact our actual thought processes; thinking remained our skill. Now the tools are starting to "think", or at least appear to on a level indistinguishable to the average person. If the box in my hand can tell me what 4367 x 2231 is and the capital of Guam, why wouldn't I rely on it when it starts writing up full content for me? Because the average human adapts to the lowest required skill set, I do worry that putting a device in our hands that "thinks" is going to reduce our learned ability to rationally process and check what it puts out, just as I've lost the ability to check whether my calculator is lying to me. And not to get all dystopian here... but what if what that tool tells me is true is, for whatever reason, not?
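For what it's worth, the lost skill of checking whether the calculator is lying doesn't require redoing the multiplication; old pencil-and-paper tricks like casting out nines catch most errors. A minimal sketch of the idea, purely illustrative:

```python
# Casting out nines: a classic trick for sanity-checking a multiplication
# without redoing it digit by digit.
def digit_root(n: int) -> int:
    # Repeated digit sum of n; equal to n mod 9, with 9 in place of 0.
    return 1 + (n - 1) % 9 if n else 0

a, b = 4367, 2231
claimed = a * b  # what the calculator says

# If the product is correct, its digit root must match the digit root
# of the product of the factors' digit roots.
assert digit_root(claimed) == digit_root(digit_root(a) * digit_root(b))
```

The check isn't airtight (an error that happens to preserve the digit root, such as two transposed digits, slips through), but it's exactly the kind of verification habit that disappears once the box is simply trusted.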

(and yes, I ran this through a spell checker because I'm a part of the problem above... and it found words I thought I could still spell, and I'm 55)

consumer451 18 hours ago | parent | next [-]

I recently learned that human brain size has been decreasing for the last 10,000 years.[0]

The thinking is that prior to us building societies, we all had to be generalists and know "everything." Once we are in a group, we can offload some knowledge to others in the society.

My point being, this all seems to have started long ago and doesn't even necessarily require technology to explain the beginnings of the trend.

[0] https://www.dwarkesh.com/i/158922207/why-is-human-brain-size...

api 20 hours ago | parent | prev [-]

But does this, at least for those who choose to use it as leverage, free up more brain power for other newer or different things?

> If I dropped most of us here into a pre-technology wilderness we'd be dead in short order.

I hear this all the time and I'm not convinced. People are incredibly resourceful under pressure. When your amygdala calmly informs your neocortex "learn, work hard, or die" the effect can be pretty profound.

People would quickly form tribes and communities and those with relevant skills would teach others. Some people would absolutely fail to adapt, but I'm not convinced it would be as many as we think.

The greatest danger in a collapse scenario would be other humans, since one path some would choose is "rob and kill other people." But that's a different sort of problem.

arjunaaqa 19 hours ago | parent | next [-]

Every time we see this argument: “this frees humanity to focus on higher things.”

And then we see what humans are actually spending more time on:

- not books
- not people
- but mobile: senseless entertainment (2-3 hours daily) and social media

If we stop using a part of the brain and the function it served (say, memory or calculation), do we ever actually use it again?

Or are we becoming more and more like zombies?

So much so that most people are incapable of reading a book,

Or even watching a 3 hour movie.

Say what you may, but this extra time is not being used for meaningful stuff.

Devices are becoming smart and our brains & bodies are becoming dumber.

A simple test: could a high school student today stand up against a high school student of the 90s?

Or could today's researchers or programmers stand up against theirs?

I mean in depth of thinking and agency.

I want this to turn out well, but real-world evidence is not showing it.

vbezhenar 19 hours ago | parent | next [-]

Human brains were at their peak size a few thousand years ago, or something like that. Since then, the average human brain has started to shrink. I can't help but think that's because civilization freed our brains from the necessity to think as much, so evolution decided that spending so much energy on the brain was wasteful and started making it smaller.

I'm not really sure evolution works in this direction today; we are not living in a food-scarce world right now... But just food for thought.

namaria 19 hours ago | parent | next [-]

"The brains of modern humans are around 13% smaller than those of Homo sapiens who lived 100,000 years ago. Exactly why is still puzzling researchers."

https://www.bbc.com/future/article/20240517-the-human-brain-...

Civilization cannot explain this trend.

api 19 hours ago | parent | prev [-]

Human brains are not the largest in the animal kingdom. Are elephants and whales smarter than us? We don't think they are, but we don't really know. It could be that they're much smarter but in different ways, maybe somatosensory or social or other ways we don't understand. It could also be that their brains are less efficient due to less selection pressure for efficiency.

In humans there is only a weak correlation between brain size/mass and IQ or other metrics of intelligence.

Then there's utterly wild stuff like this that reminds us of how little we really understand about brains and intelligence:

https://www.sciencealert.com/a-man-who-lives-without-90-of-h...

The fact that someone can function like this is incredible and indicates that the brain must contain a lot of redundancy, or something even weirder is going on.

Stuff like that is enough to make you wonder if we know anything at all.

Another similar data point is the spooky intelligence of many birds, like crows, who have tiny brains. Flying animals are under extreme selection pressure for efficiency because they need to be small and light, so their brains have gotten very efficient.

latexr 19 hours ago | parent | prev | next [-]

> So much so that most people are incapable of reading a book,

> Or even watching a 3 hour movie.

I agree with your thesis in general, but I don’t think these two in particular are comparable the way you’re phrasing them.

I have read books in a single five- or six-hour sitting, but those were “by accident” in the sense that I wasn’t expecting to finish the book the day I started it; I went in expecting there would be pauses. Books work well with this type of interruption and have well-defined chapters.

A three hour movie, on the other hand, I see as a commitment I must try to not interrupt because it is designed as a single experience. Breaking it up detracts from the artist’s goal. Before starting it I must immediately look at clock and do some math: can I even begin to watch this movie, considering that in two hours I should <be preparing dinner | sleeping | picking someone up | something else>?

A similar phenomenon is when we don’t feel like watching a two-hour movie “because it’s too long” but then happily binge watch four hours of some TV show instead. Even if we ignore that TV shows are often designed to be more addictive, the fact that you have clearly delineated stop points—chapters, if you will—makes them a more manageable commitment.

api 19 hours ago | parent | prev [-]

A lot of people may use this free time/energy to immerse themselves in crap. Many will not.

I personally expect a major societal/cultural revolt against brain rot scrolling. It's kind of already brewing.

_heimdall 19 hours ago | parent | prev | next [-]

To me the middle ground is where it's really interesting; jumping from one extreme to the other has so many unknowns.

Surely we free up brain power for other, newer things, but that comes at a cost. We lose a lot of potentially useful details of how and why we got here, and that context would be really helpful as we march toward the next technology.

For example, most people (I'll stick to the US here) stopped producing most of their own food decades ago. Today most people don't really know where their food comes from or what it takes to grow/raise it. It's no wonder that we now have a food system full of heavily processed foods and unpronounceable ingredients that may very well be doing harm to our overall health.

idopmstuff 18 hours ago | parent [-]

> It's no wonder that we now have a food system full of heavily processed foods and unpronounceable ingredients that may very well be doing harm to our overall health.

Sure, but in the old system people just starved to death when there were problems with their crops (Irish potato famine, dust bowl, etc.). The current system isn't perfect, obviously, but this example seems to pretty clearly demonstrate a case where it's better that we've outsourced this knowledge to others.

Also, it's worth bearing in mind that we're now at a point where basically all of the information that people have "lost" is now once again available on the internet. Most people don't use it, because there simply aren't enough hours in the day, but people who care can find out more than any farmer 100 years ago about food and source theirs accordingly.

_heimdall 17 hours ago | parent [-]

> Sure, but in the old system people just starved to death when there were problems with their crops (Irish potato famine, dust bowl, etc.).

Sure, but then we've traded smaller, more frequent disruptions for the risk of less frequent but much larger ones. More to my original point though (I may have rambled there and not been clear), the risk I'm raising is that we now make decisions based only on today's situation, unaware of the context that got us here. That is fine most of the time, but incremental change isn't foolproof, and sometimes the context of how you got here is extremely important in making the next decision.

> people who care can find out more than any farmer 100 years ago about food and source theirs accordingly.

There are a few risks there though, maybe they're worth it but still risks.

You don't know what you don't know, and in that case it's hard or impossible to find it online. Plenty of historical knowledge also doesn't live online at all; it's still hard to find research papers more than a few decades old. At best they're online as a PDF, likely not indexed or searchable.

We also can't expect those in charge to know much of anything when the scale of lost, unknown context grows too far and too fast. At best they outsource that knowledge to others, but those others are likely experts in only one small piece of the puzzle. To me that seems like a very delicate balance that can work for a time but would inevitably fail in ways we couldn't predict.

All that said, I'm also not trying to make the argument that we must know all the context and history of anything we deal with. Just the importance of at least recognizing what we don't know and where the risks are.

filoleg 17 hours ago | parent | prev | next [-]

> But does this, at least for those who choose to use it as leverage, free up more brain power for other newer or different things?

My personal belief is that the answer to this is “absolutely.” That’s how it proliferates on the level of society in fundamental ways, otherwise it wouldn’t.

Just think of the analogy the grandparent comment makes. Yes, if we transported a bunch of modern specialists many thousands of years into the past, they would struggle just to survive. But in a modern environment, they are able to make crucial contributions to things that advance the rest of humanity, make the world better to live in, and push humanity as a species forward. That is something absolutely nobody thousands of years ago was able to do (I’m talking about the specific things, like computers, not the ability to push humanity forward in general; after all, we got to the current point exactly from those times).

I just don’t see a human civilization sending a human to the moon, or achieving accessible air travel, without heavy specialization across people. And heavy specialization is, imo, unachievable if everyone’s survival depends on being a full-time survival generalist.

intended 20 hours ago | parent | prev | next [-]

Everyone is now a cyborg; you are either more or less dependent on your tooling side or your biological side.

alganet 19 hours ago | parent | prev | next [-]

> When your amygdala calmly informs your neocortex "learn, work hard, or die" the effect can be pretty profound.

There are cases and cases, of course.

Let me give you a counter-example:

An AI that can invest better than VCs could put them in a precarious position. Why would we need them if an AI can do it?

Of course that is a very improbable scenario. AIs can't form networks, inherit family money or form lobbies, so it is unlikely for such tech to compete in that realm. It would be very nice if it could! Can you imagine that?

Keeping an open mind about different and wild scenarios is always a good thing we humans do.

bluefirebrand 18 hours ago | parent [-]

> AIs can't form networks, inherit family money or form lobbies, so it is unlikely for such tech to compete in that realm. It would be very nice if it could! Can you imagine that?

I think if we ever create a society where AI is forming lobbies and inheriting fortunes, I will feel morally obligated to attempt to destroy every computer system on the planet

I cannot believe you would type the words "it would be very nice if it could" after describing such a nightmare

alganet 17 hours ago | parent [-]

So, software developers and writers can be replaced by a machine but venture capitalists can't?

It doesn't make sense. AI should be even better at replacing those. What does an AI need money for? It would spend it more responsibly than a human VC would.

Think of it as a guide. As the VC in charge lends his money and network to the AI, he is training a tool that can help usher in a revolutionary new era of investment, full of a brand new generation of investors.

If you are a VC, just try it. You might be surprised by the results and end up liking it. Who knows? AI is a friend!

alganet 16 hours ago | parent | next [-]

You know what? Convincing these stubborn VCs would take a long time. We should just take their money, train the thing and show them how good AI can be.

bluefirebrand 16 hours ago | parent | prev [-]

> software developers and writers can be replaced by a machine but venture capitalists can't

Human Venture capitalists can in theory be held accountable for their actions, even if it does seem like they rarely actually are

AI venture capitalists cannot be held accountable, so they should not be allowed to exist

Accountability is important across all of human society

AI's utter lack of accountability is not an accident, it is an appealing feature for immoral people who love the idea of laundering their own responsibility through a machine

> If you are a VC, just try it. You might be surprised by the results and end up liking it. Who knows? AI is a friend

I am not a VC, and AI is not a friend

alganet 16 hours ago | parent [-]

Ah, the chicken-and-pig story. The writers and developers only lay eggs; the venture capitalist puts skin in the game. I've heard that before.

Again, think of it as a guide. AI has been used to detect fraud; it is already trusted in financial systems. It could be used as a tool to keep VCs in check. With this automated moral guide, it could help train VCs who are accountable not only in theory, but in practice too!

I find it offensive that you are claiming AI could lead to a lack of accountability. Can you show me an example of a VC who used it to unfairly exploit the system?

AI is a friend. It is already everywhere, why not concede to it? Concede to AI.

bluefirebrand 15 hours ago | parent [-]

> AI is a friend. It is already everywhere, why not concede to it? Concede to AI.

I am never going to take you seriously if you keep talking like you're an indoctrinated member of an AI cult

alganet 15 hours ago | parent [-]

I do take the phenomenon of AI cultists _very_ seriously. I wonder who gave them a platform to achieve such a high presence in so short a time, and what it would take to stop the next cult before it happens.

tgv 20 hours ago | parent | prev | next [-]

> free up more brain power for other newer or different things?

That's wildly speculative. So speculative it cannot be taken seriously as an argument. The brain is flexible, but not unlimited. Quite a few functions seem to prefer a specific part of the brain. In fact, I don't know of any that is free-floating, though that might be because it's hard to find one.

But what the brain requires above all is training. Without it, all that power is laid to waste. You can't learn a new language without actually learning it, nor do something new without actual training. You can't be intelligent without training your intelligence and putting real effort into it. Relying on a computer for the answers keeps you dumb. Use it or lose it, as they say.

And what is that new thing that our brains are going to do? You don't know. And since you don't know, why throw it around like it will offset the harm that can come from using AI? Are you already that dependent on it?

financetechbro 18 hours ago | parent | prev [-]

Look at how well people have “adapted” to social media and short form content and then decide whether your point still stands…

I think your point is valid, but I see it as something that will happen with only a small percentage of the population. The reality is that people don’t like to think; it’s hard and inconvenient, and often involves learning uncomfortable new things about yourself and the world that go against inherited world views. I don’t think AI will help improve this at all. To me, poor use of tech is just like binging junk food, and it’s difficult to stop binging junk food.

keiferski 19 hours ago | parent | prev | next [-]

Seems like a real lack of nuance in these types of conversations. Personally I feel like AI has both directly and indirectly helped me improve my intelligence. Directly by serving as an instantaneous resource for asking questions (something Google doesn’t do well anymore), making it easier to find learning materials, and easily reorganizing information into formats more amenable to learning. Indirectly by making it easier to create learning assets like images, which are useful for applying the picture superiority effect, visualizing information, etc.

At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

haswell 19 hours ago | parent | next [-]

> It may destroy the research ability of the average person

In this “post-truth” era, I think this is deeply concerning and has the potential to far outweigh the benefits in the long run. People are already not good at critically evaluating information, and this is already leading to major real world impact.

I say this as someone who has personally found LLMs to be a learning multiplier. But then I see how many people treat these tools like some kind of oracle and start to worry.

OtherShrezzing 19 hours ago | parent | next [-]

I have some incomplete thoughts that the rise in LLMs is in part driven by society's willingness to accept half-accuracies in a post-truth world.

If the societies of 2005 had the technologies of 2025, I expect OpenAI/Anthropic etc would have a much more challenging time convincing people that "convincingly incorrect" systems should send Nvidia to a $1tn+ valuation.

keiferski 19 hours ago | parent | prev [-]

I guess in my experience the people that are that influenced by AI answers…weren’t exactly doing deep research into topics beforehand. At the very least an AI tool allows for some questioning and push back, which is a step up from the historical one-directional form of information.

19 hours ago | parent [-]
[deleted]
latexr 19 hours ago | parent | prev | next [-]

> At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

You live on a planet with billions of other humans. Maybe you are using LLMs carefully and always verifying outputs, but most people definitely are not, and it is naive to believe that is only their problem. It will soon be your problem too, because what those people do will eventually come back to bite you.

An unrelated quote from John Green feels appropriate:

> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

One day you’ll be deeply affected by a code bug or clerical decision caused by someone who blindly accepted the words of whatever LLM they were using. An LLM which can itself be created with specific bias, like denying the existence of a country, rejecting scientific consensus, or simply trying to sell you a product.

CompoundEyes 19 hours ago | parent | prev | next [-]

I agree with the power-user view too. AI wouldn’t exist if it weren’t for the heightened trait in some people of asking why, how, and what if, and of reinterpreting, to push arts, science, and technology forward. We don’t need everyone to do that. I also think it can help us solve problems that are on the edge of being “unstuck”, from which new problems requiring human ingenuity will emerge. Let’s spend our time solving those novel problems for which AI has no pattern to apply.

15 hours ago | parent | prev [-]
[deleted]
alfonsodev 19 hours ago | parent | prev | next [-]

Critical thinking and understanding what an LLM really is are crucial. For educated people, I think it only augments intelligence rather than harming it. With that said, what about the rest of the people?

Why not make an onboarding tutorial explaining what is really going on?

I had a little conversation with ChatGPT about ethics, and it acknowledged that most probably one of its instructions is to stay aligned with the user to maximize engagement. This might come from training data of people speculating on Reddit, or from the model being able to observe his own output and deduce what’s going on. I don’t know, and there is no way to know, so does it really matter? Because it’s kind of a meta point.

I’m sure many of us have heard from non technical people that chatGPT is their best friend.

Don’t get me wrong, I love the tech, but I don’t think it’s enough just to not give it a human name; the illusion is too strong and misleading.

I think there should at least be a very visible button to switch to raw mode: disable pleasing, disable talking like a human, disable trying to be my friend in subtle ways, praising, etc. And ideally a visualization of the path it has taken through the graph, though I know this is impossible right now.

pseudocomposer 19 hours ago | parent | next [-]

I largely agree, but I also think you might want to consider how the existence of LLMs affects our education system. I think this is one of the places they really have the most potential to cause harm, but also perhaps some incredible (humanity-changing) good.

I suspect countries, and even individual local schools, who effectively adapt their curricula to account for LLMs will see an enormous difference in student outcomes in the next couple of decades.

bluefirebrand 18 hours ago | parent [-]

We are already seeing that. I have friends who are finishing degrees right now and their classmates have all been using LLMs extensively

They are capable of producing somewhat working solutions, but understand none of it

It is the worst possible outcome, imo

chii 19 hours ago | parent | prev | next [-]

> switch to raw mode

It would be impossible if doing so meant less profit for OpenAI (or any of the for-profit AI companies).

The only way to achieve this would be to ensure that LLMs can be run locally by individuals. But as hardware requirements for LLMs grow, I increasingly suspect that might not be possible.

raincole 19 hours ago | parent | prev | next [-]

> I’m sure many of us have heard from non technical people that chatGPT is their best friend

Online stories, yes. I've never heard people saying that in real life.

latexr 18 hours ago | parent | prev | next [-]

> Why not make an onboarding tutorial explaining what is really going on?

Because that would mean openly and actively admitting the limitations and problems of LLMs, and that’s bad for profits (which are the only thing the owners of these systems care about).

johnea 13 hours ago | parent | prev [-]

> the model being able to observe his own output

I never knew ChatGPT had a d1ck?

"Him" and the big invisible guy in outer space have yet another thing in common...

JasmineSCZ 5 hours ago | parent | prev | next [-]

I think so. Large language models like ChatGPT have a very obvious impact on daily life. In the past, I would think about an article’s framework and logical order while working; now I can’t remember the last time I did. It has reduced my thinking process and even replaced my thinking, and I’m just an ordinary working person. In the future, these models will certainly have a huge impact on students.

jmull 20 hours ago | parent | prev | next [-]

It seems unlikely.

We've invented and used various memory/thinking/cognitive assists throughout time, and, for us collectively at least, these seem to just expand our capabilities.

AI will surely cause problems, possibly profound ones that may make us question whether it's worth the cost... but this probably isn't one of them.

GeoAtreides 13 hours ago | parent | next [-]

Student inputs homework in chatGPT, copies the answers, calls it a day.

What capabilities were expanded?

alganet 12 hours ago | parent [-]

In that case, none.

It could work if the student were challenged and tested by the LLM itself, with their performance overseen by a human evaluator. But I wouldn't know.

A learning scenario is very different from a work scenario or daily assistant scenario.

I think there are many ways in which LLMs can harm humans. Almost all of them are of the "use on others for psychological influence" kind. But hey, even that could be seen as training, if you have the guts to handle the pressure.

Mistletoe 20 hours ago | parent | prev [-]

It’s very difficult for me to even navigate my city without GPS now; I suspect using AI atrophies your brain in a similar way. Does using an excavator cause your muscles to atrophy compared to using a shovel? Of course it does.

carra 19 hours ago | parent | next [-]

Similar to this: before cellphones stored all our contacts, we all used to remember several phone numbers for our main family members and friends. Sometimes even some places (like work or school). Now we don't bother and most of us only remember our own number.

aaronbaugher 19 hours ago | parent [-]

I called my girlfriend of six months yesterday, and she didn't answer, so I waited through the automated message to leave a voice mail. It struck me that I had no idea what her phone number is, not even the exchange. It takes me a second to remember my own number.

toddmorey 20 hours ago | parent | prev [-]

Yes but is that genuine atrophy... or applying that part of your brain power to other things? Has anyone actually studied this? I sort of like that I can concentrate more on the podcast playing without worrying if I'm about to miss my left turn.

Mistletoe 20 hours ago | parent | next [-]

https://www.nature.com/articles/s41598-020-62877-0

toddmorey 20 hours ago | parent [-]

"Given that the sample was primarily undergraduate students, many participants were unreachable or had moved away from the city three years after initial testing, and therefore a small subset of 13 participants (4 women, 9 men; mean age: 28.46 ± 3.93 years old; Table 1) came back for a follow-up assessment"

I'd file this under more research needed.

spyderbra 19 hours ago | parent | prev [-]

Brain power to fuel doomscrolling? Let's be real: by using LLMs we are not taking on higher-order tasks, just chasing dopamine.

loudmax 19 hours ago | parent | prev | next [-]

Socrates had this to say about literacy:

> In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own.

Presumably very few people since Socrates would argue that society would be better off without writing. But it's a legitimate point. There is a cost to any new skill or technology. We should be conscious of what we're giving up in this exchange.

namaria 19 hours ago | parent | next [-]

Socrates never argued that society would be better off without writing! Writing had existed for three thousand years by the time Socrates was alive, and the Epic Cycle of Homeric poetry had existed for about three centuries.

In the very same dialogue where the excerpt you quote comes from he also said:

"Any one may see that there is no disgrace in the mere fact of writing."

And the section of the dialogue your quote comes from is preceded by this suggestion:

"Shall we discuss the rules of writing and speech as we were proposing?"

and later right before the bit you quote from:

"But there is something yet to be said of propriety and impropriety of writing."

So they are not discussing the merits of writing per se but the ethics of writing.

This excerpt you offer comes from a stretch where Socrates is telling a story. This is merely what one of the characters in the story tells the other.

Further in the dialogue Socrates clarifies:

"SOCRATES: Well, then, those who think they can leave written instructions for an art, as well as those who accept them, thinking that writing can yield results that are clear or certain, must be quite naive and truly ignorant of [Thamos’] prophetic judgment: otherwise, how could they possibly think that words that have been written down can do more than remind those who already know what the writing is about?"

and the rest of the dialogue is also quite illuminating:

"PHAEDRUS: Quite right.

SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.

PHAEDRUS: You are absolutely right about that, too.

SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?

PHAEDRUS: Which one is that? How do you think it comes about?

SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent."

sanderjd 19 hours ago | parent | prev [-]

But I think everyone since Socrates would agree that it's a good thing someone else wrote down a bunch of the stuff he said.

disqard 17 hours ago | parent [-]

Right, this same phenomenon is evident in all media -- once it starts to take hold, the only way to effectively critique it, is through the medium itself. Hence, the proverbial Letter to the Editor complaining about the newspaper's content quality; the televised debates over what's on TV these days; the YT videos about the Internet ruining people's brains, etc.

praveeninpublic 20 hours ago | parent | prev | next [-]

I can no longer do math quickly in my head; calculators killed fast mental math. But if I have to calculate on my own, it's not fundamentally impossible. I can still do it, because it's just computation.

But, LLMs help us think, which is much more than just computing, that's more dependency.

aaronbaugher 19 hours ago | parent | next [-]

I suppose that depends what you do with them. I spent some time this weekend using Grok to work on a business plan and some other projects. I find myself using it for research, quickly winnowing information down to what's relevant to my needs, and sort of bouncing ideas off it the way I would with another person. I always have to keep in mind that it could get something wrong, but then again, so could a person.

I don't think it's helping me think; more like it's helping me organize my thoughts and find inspiration. I suppose others might use it in a more dependent way.

praveeninpublic 19 hours ago | parent [-]

Fair point. As a software engineer using Cursor, I’ve noticed it writes most of the code now. It’s easy to accept without review, which builds dependency. My role feels less like just coding and more like PM, tester, and reviewer combined.

We’ve started trusting AI the way we trust calculators, assuming it's right without checking. But LLMs can be confidently wrong, and once that habit sets in, even the “ChatGPT can make mistakes” warning fades into the background.

temp0826 20 hours ago | parent | prev [-]

My calculator doesn't hallucinate (said another way- pending no input error, I can blindly trust it...which is something that would get me in trouble with a LLM)

Workaccount2 18 hours ago | parent | prev | next [-]

I reckon this can kind of be related to landing an easy comfortable job, where you just are tasked with maintaining the same project day after day for years. Eventually you realize your skills that made you capable in your field have withered and died, and you have a latent fear now that if you lose your job, you wouldn't be able to perform well at all in a more typical active role. Skill rot is definitely a real thing.

As LLMs become more and more capable, people will lean on them more and more to do their job, to convert their job from an active role to a passive "middle-man-the-LLM" role.

toddmorey 20 hours ago | parent | prev | next [-]

I can absolutely imagine this much knowledge on tap making us more impatient and less resilient. When I was a kid, there was already a bit of "why do I even need to know how to do this when calculators exist?" This is that on steroids and more broadly.

However, there's a counter force, too, as there always is. I'm also pursuing new areas of interest and exploration where the early friction and amount to learn would have either completely fatigued me or scared me off. It's like having 24 hour access to a really good mentor and thought partner.

roberto2016 19 hours ago | parent | prev | next [-]

Just like books hurt our ability to memorize epic poems.

bhouston 19 hours ago | parent | prev | next [-]

100%, as the next wave of students going through school will be reliant on ChatGPT for a lot of the complexity in their thinking. Basically, complex thoughts and reasoning will be increasingly outsourced to AI.

Even if there were no further progress with AI, so much of the next generation is already outsourcing its unimportant thinking to it.

SillyUsername 20 hours ago | parent | prev | next [-]

IMHO YES!

Just like the industrial revolution impacted barrel makers (coopers).

Except we aren't yet reaping the full rewards or the skills realignment, so we've still to see the car-making impact (which came post-revolution, but replaced manual labour with machines as their ability grew and their relative cost shrank).

We even have our own Luddites :D

giraffe_lady 18 hours ago | parent [-]

I guess I get to be the one who brings this up this time. The luddites were not strictly against the technological changes, they were a labor movement protesting how capital owners were using a new technology to dispossess workers who had no viable alternatives.

As this is also one of the major risks of AI, one that has already come to bear directly, there's a lot we can take from their movement when we don't dismiss it as shorthand for "being wrong about technology."

namaria 18 hours ago | parent [-]

Thank you!

There is a strong correlation between the suppression of labor organization (strict enforcement of the draconian legislation of the time, under which labor organizers could be sentenced to death) and 'Luddite' activity.

It was a last resort sort of action against oppressive laws and overzealous enforcement, not some ignorant response to technological progress.

topaz0 20 hours ago | parent | prev | next [-]

The title is about intelligence and that is a fair concern, but honestly I think the bigger issue (also discussed in the article) is more about discernment, which is foundational for any kind of human fulfillment IMO.

ekm2 8 hours ago | parent | prev | next [-]

Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you, it really became our civilization, which is, of course, what this is all about: Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You had your time. The future is our world, Morpheus. The future is our time.

Agent Smith, The Matrix (1999)

alganet 8 hours ago | parent [-]

"We were sad of getting old and it made us RESTless. It was just like a movie. It was just like a song"

Adele, When We Were Young (single, 2015).

ekm2 6 hours ago | parent [-]

How does that relate to AI?

alganet 5 hours ago | parent [-]

It is a recursive coincidence.

First, it relates to the Matrix transcript, which is referenced across the song.

"it was just like a movie, it was just like a song", establishes that the song and movie are related.

"made us RESTless" references the "lacking the programming language to describe your perfect world" part of the transcript, REST being the architectural foundation of the web, "programming language" being a simplification that exemplifies the lack of understanding from Agent Smith, and also a common misconception regarding what REST is (an architectural style, not a language, not a protocol). The web still works upon REST pillars, despite what many people say.

The coincidences can be used to assert a point in time in which the web "is getting old", and abandoning those old foundations.

It is declared by many that AI will be the technology that brings us to that point. A "simulation" (movie reference to the training on web content) built to resemble the web as a "photograph" (song reference to the training on the web content), both alluding to the idea that this could be the very last moments we will see the web "in this light" (song reference to end of the web), the clouds will go dark (movie reference to the end of the web).

It's quite a thing, those coincidences.

datadrivenangel 19 hours ago | parent | prev | next [-]

And Plato decried the use of writing, for the children would lose their memories.

namaria 18 hours ago | parent [-]

No he did not. In the dialogue you're thinking of Socrates talks to Phaedrus, and neither of them "decried the use of writing".

Writing had existed for 3 millennia by the time Socrates was alive.

In fact Socrates says in that dialogue:

"Any one may see that there is no disgrace in the mere fact of writing."

roberto2016 19 hours ago | parent | prev | next [-]

Also, GPS hurts our ability to remember how to get places. But is it worth the loss?

ericyd 19 hours ago | parent | prev | next [-]

I only skimmed it because the thesis is so incredibly broad it's effectively impossible to prove or disprove. There's no way we could know something this significant at a population level due to the effects of tools that came out a handful of years ago.

mg794613 19 hours ago | parent | prev | next [-]

Can we stop the fearmongering clickbait articles on here please?

Electricity didn't make us bad, cameras didn't steal our soul and trains are not metal bulls from hell.

Just come with proven facts instead of these "could ... blabla... be bad?", yes it could. Now what?

latexr 18 hours ago | parent [-]

> yes it could. Now what?

Now we have a conversation about how exactly it can be bad, how important those effects are, how they compare to the alternative, what we should do to mitigate or eliminate those problems, …

The article covers several of these points, it doesn’t merely pose the question and leave it at that.

Gud 18 hours ago | parent | prev | next [-]

Yes.

scotty79 18 hours ago | parent | prev | next [-]

Are smartphones? Is GPS? Is the internet? Are computers? Are calculators? Is TV? Is writing?

fud101 19 hours ago | parent | prev | next [-]

I am supposed to master Javascript for work but I just use chatgpt. I never develop muscle memory for my job. I'm thinking of getting out of Tech now, I just wasted all those years not learning things when I could have. AI makes it impossible for me to learn when I need to depend on this crutch.

akomtu 18 hours ago | parent | prev | next [-]

What LLMs are doing to us is similar to the well known EEE (Embrace Extend Extinguish) strategy used by Microsoft. Today we're embracing LLMs as our helpers. Tomorrow LLMs will extend our intelligence with skills that can't be done without LLMs and everyone will have to use these brain-extenders to participate in the society. Finally LLMs will become advanced enough to not need us.

j45 20 hours ago | parent | prev | next [-]

How we do or don't use something creates the harm or benefit.

Consuming instead of creating is what has caused harm.

Passive average skilled prompting will give average results. LLM's can be used to actively work through your own thinking and engage it as quickly and deeply as you like.

rad_gruchalski 20 hours ago | parent [-]

I don’t know; to me it’s always a fight to get the LLM to remember as much of the previous context as possible. Instead of moving forward with my thoughts, I seem to fight the LLM forgetting what it said five minutes ago. And fact checking: the best example was when I asked ChatGPT about some Istio CLI features. It hallucinated available CLI commands, but they were very convincing.

photochemsyn 18 hours ago | parent | prev | next [-]

Intelligence, like physical ability, is trainable within limits; the general belief appears to be that such limits are genetically determined and vary widely among human beings, but it also seems clear that the vast majority of human beings never even approach those limits, for the same reasons that most people don't become Olympic-level athletes even if they have the genes for it: they don't put in the time training and improving their abilities, or they're hobbled by injuries of various kinds.

Now if you have an objective goal such as improving mind-body performance across many different metrics, LLMs can be an aid: you can have them help with designing and developing physical and mental training regimens on a daily schedule, pointing out flaws in your understanding, etc. As for the article's thesis, you could spend 15 minutes writing a prompt about whatever author strikes your fancy and then have the LLM dissect, critique and grade your effort if you like, then rinse and repeat - just as with lifting weights, your short-essay skills will improve.

As to why many people don't seem interested in following such rigorous programs, we could blame consumer capitalism, advertising aimed at immediate gratification and the promotion of addictive behaviors for short-term profit, on one hand, and fear among the ruling classes of an educated, informed and independently-minded population, with a resulting emphasis on rote memorization and appeal to authority over critical, analytical and creative skills, etc., on the other.

curtisszmania 20 hours ago | parent | prev | next [-]

[dead]

Longtemps 17 hours ago | parent | prev [-]

[flagged]