fn-mote 3 hours ago

I encourage everyone thinking about commenting to read the article first.

When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.

> Con: AI poses a grave threat to students' cognitive development

> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.

None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree, though.

> Con: AI poses serious threats to social and emotional development

Yep. Just like non-AI use of social media.

> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?

> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.

Genius. I love this idea.

=== ETA:

I believe that explicitly teaching students how to use AI in their learning process, and that a beautiful paper straight from AI is not something that will help them later, is another important ingredient. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.

nospice an hour ago | parent | next [-]

>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"

> Genius. I love this idea.

I don't think it would really work with current tech. The sycophancy lets LLMs be wrong about a lot of small things without the user noticing. It also lets them be useful in the hands of an expert, by not questioning the premise and just trying their best to build on it.

If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce the students' reliance on LLMs...

neomantra 28 minutes ago | parent | next [-]

I have a two-fold approach to this:

* With specific positive or negative feedback: I give the LLM friendly compliments and critiques, to reinforce things I like and reduce things I don't.

* Rather than thinking sycophantic/antagonistic, I am more explicit about its role, e.g. "You are the Not Invented Here technologist the CEO and CTO of FirmX will bring to our meeting tomorrow. Review my presentation and create a list of shortfalls or synergies, as well as possible questions."

So don't say "please suck at your job", give them a different job.
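
For concreteness, a minimal sketch of that role-based approach, assuming the OpenAI Python SDK (the model name, file name, and exact prompt wording are illustrative placeholders, not anything from the article):

```python
# Give the model a concrete adversarial job instead of telling it
# to "be antagonistic". All names here are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are the 'Not Invented Here' technologist the CEO and CTO of FirmX "
    "will bring to our meeting tomorrow. Review my presentation and create "
    "a list of shortfalls or synergies, as well as possible questions."
)

with open("presentation.md") as f:  # the deck, exported as plain text
    presentation = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": presentation},
    ],
)
print(response.choices[0].message.content)
```

The point is that the "antagonism" lives in a plausible persona with a defined deliverable, so the model has something concrete to push against.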

j45 28 minutes ago | parent | prev [-]

Technology is working right now at the school in the article. Reading it will help fill in the picture of how.

beej71 an hour ago | parent | prev | next [-]

> I believe that explicitly teaching students how to use AI in their learning process, and that a beautiful paper straight from AI is not something that will help them later, is another important ingredient.

IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.

Curricula have to be modified significantly for this to work.

I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)

MengerSponge 15 minutes ago | parent [-]

> powerful learning accelerator

You got any data on that? Because it's a bold claim that runs counter to all the results I've seen so far. For example, this paper[^1], which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...

[^1]: https://doi.org/10.1093/pnasnexus/pgaf316

ForceBru 2 hours ago | parent | prev | next [-]

> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" you don't like, the model will learn it anyway. You'd have to manually clean the data (extremely hard work, and lowkey censorship) or do extensive "post-training".

It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.

If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.

eleventyseven 2 hours ago | parent | next [-]

> Who decides what needs to be "pushed back"?

Millions of teachers make these kinds of decisions every minute of every school day.

mhuffman an hour ago | parent | next [-]

So would your recommendation be that each individual teacher puts in their own guardrails, or would you try to get millions of teachers to agree?

ForceBru an hour ago | parent | prev [-]

True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. This should of course be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. This alone is not easy because someone will have to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives and so on. This then feeds into data curation and so on.

Teachers are also "local". The resulting LLM will have to be approved nation-wide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How are they going to differ from each other?

Moreover, people will hate this because they'll be aware of it. There will be a government-approved sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?

Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (whatever the "things" may actually be). I guarantee parents from all political backgrounds are going to be extremely mad.

NegativeK an hour ago | parent [-]

I think you're interpreting the commenter's/article's point in a way that they didn't intend. At all.

Assume the LLM has the answer a student wants. Instead of just blurting it out to the student, the LLM can:

* Ask the student questions that encourage them to think about the overall topic.

* Ask the student what they think the right answer is, and then drill down on the student's incorrect assumptions so that they arrive at the right answer.

* Ask the student to come up with two opposing positions and explain why each would _and_ wouldn't work.

Etc.

None of this has to get anywhere near politics or whatever else conjured your dystopia. If the student asked about politics in the first place, this type of pushback doesn't have to be any different than current LLM behavior.

In fact, I'd love this type of LLM -- I want to actually learn. Maybe I can prompt one into actually trying.
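
As a rough sketch of the behavior I mean, assuming the OpenAI Python SDK (the model name and prompt wording are made up, not from the article or report):

```python
# A "Socratic" tutor loop: the system prompt encodes the three behaviors
# above, and the history list carries the back-and-forth between turns.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = """You are a tutor. Even when you know the answer, do not
state it outright. Instead:
1. Ask questions that get the student thinking about the overall topic.
2. Ask what they think the answer is, then drill down on their incorrect
   assumptions until they can reach the right answer themselves.
3. Where it helps, have them argue two opposing positions and explain why
   each would and wouldn't work."""

history = [{"role": "system", "content": TUTOR_PROMPT}]

def ask(student_turn: str) -> str:
    """Send one student message, keep conversation state, return the reply."""
    history.append({"role": "user", "content": student_turn})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Why does ice float on water?"))
```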

ForceBru 25 minutes ago | parent [-]

In fact, I agree with the article! For instance, many indeed offload thinking to LLMs, potentially "leading to the kind of cognitive decline or atrophy more commonly associated with aging brains". It also makes sense that students who use LLMs are not "learning to parse truth from fiction ... not learning to understand what makes a good argument ... not learning about different perspectives in the world".

Somehow "pushing back against preconceived notions" is synonymous to "correcting societal norms by means of government-approved LLMs" for me. This brings politics, dystopian worlds and so on. I don't want LLMs to "push back against preconceived notions" and otherwise tell me what to think. This is indeed just one sentence in the article, though.

1propionyl an hour ago | parent | prev [-]

> Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"

Then don't. It's easy enough to pay a teacher a salary.

ForceBru an hour ago | parent [-]

Yep, fully agree with this

aeternum 25 minutes ago | parent | prev | next [-]

I read it, seems like an ad for some Afghan e-learning NGO (of course only for girls).

Think of the children, LLMs are not safe for kids, use our wrapper instead!

simonw an hour ago | parent | prev | next [-]

> I believe that explicitly teaching students how to use AI in their learning process

I'm a bit nervous about that one.

I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught.

What's an open question for me is whether kids can learn that skill early in their education.

It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn.

Can kids be leveled up to that point? I honestly don't know.

j45 29 minutes ago | parent | prev | next [-]

The article is very balanced.

To arrive at that balance it has to set up both sides, which takes long-form text that some people might not want to read.

It might make people examine their current beliefs, how those beliefs formed, and any dissonance associated with them.

daft_pink an hour ago | parent | prev | next [-]

I think that it’s too early to start making rules. It’s not even clear where AI is going.

samrus 34 minutes ago | parent [-]

What a do-nothing argument. We know where it is now. Let's quickly adapt to this situation, and then we'll adapt to where it goes next.

mistrial9 3 hours ago | parent | prev [-]

>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

>> How could you argue against it, though?

Because large-scale societies do use and deploy rote training, with grading and uniformity, to sift and sort for different kinds of talent (classical music, competitive sports, some maths) at a societal scale. Further, training individuals to play a routine, specialized role is essential for large-scale industrial and government growth.

Individualist worldviews are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it. My thesis here is that all large-scale societies will continue on this road, and in fact it is part of "competitiveness" from industrial and some political points of view.

The balance point between individual development and role-based training will have to evolve; indeed, it will evolve. But with what extremes? And among whom?