fn-mote 3 hours ago:
I encourage everyone thinking about commenting to read the article first. When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.

> Con: AI poses a grave threat to students' cognitive development

> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.

None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree, though.

> Con: AI poses serious threats to social and emotional development

Yep. Just like non-AI use of social media.

> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?

> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.

Genius. I love this idea.

===

ETA: I believe that explicitly teaching students how to use AI in their learning process, and that the beautiful paper direct from AI is not something that will help them later, is another important ingredient. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.
nospice an hour ago:
>> AI designed for use by children and teens should be less sycophantic and more "antagonistic"

> Genius. I love this idea.

I don't think it would really work with current tech. The sycophancy allows LLMs to be wrong about a lot of small things without the user noticing. It also allows them to be useful in the hands of an expert, by not questioning the premise and just trying their best to build on it. If you instruct them to question ideas, they just become annoying and obstinate. So while it would be a great way to reduce students' reliance on LLMs...
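To make the failure mode concrete: the naive version of an "antagonistic" tutor is just a system prompt, and the calibration problem lives entirely in its wording. A minimal sketch, assuming the openai Python client; the prompt text and the tutor_reply helper are mine, purely illustrative:

    # Naive "antagonistic tutor" built from a system prompt alone.
    # Assumes the openai package and OPENAI_API_KEY in the environment;
    # the prompt wording is illustrative, not from the report.
    from openai import OpenAI

    client = OpenAI()

    ANTAGONISTIC_TUTOR = (
        "You are a tutor for teenage students. Never simply hand over "
        "an answer. When the student states a claim, ask what their "
        "evidence is and offer one counterexample before confirming "
        "anything. If their premise is wrong, say so plainly."
    )

    def tutor_reply(student_message: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model works here
            messages=[
                {"role": "system", "content": ANTAGONISTIC_TUTOR},
                {"role": "user", "content": student_message},
            ],
        )
        return response.choices[0].message.content

    # Tune the prompt toward pushback and it nitpicks true premises;
    # soften it and the sycophancy comes right back.
    print(tutor_reply("The Great Wall of China is visible from space, right?"))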
beej71 an hour ago:
> I believe that explicitly teaching students how to use AI in their learning process, and that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.

IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly. Curricula have to be modified significantly for this to work.

I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)
ForceBru 2 hours ago:
> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back" on? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" you don't like, the model will learn it anyway. You'd have to manually clean the data (extremely hard work, and lowkey censorship) or do extensive post-training.

It's also not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and append "but you should think for yourself!" to each answer won't work, because everyone will just skip that last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM. If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use the model at all. In that case, someone just wasted money on training it.
aeternum 25 minutes ago:
I read it; it seems like an ad for some Afghan e-learning NGO (of course only for girls). Think of the children, LLMs are not safe for kids, use our wrapper instead!
simonw an hour ago:
> I believe that explicitly teaching students how to use AI in their learning process

I'm a bit nervous about that one. I very firmly believe that learning well from AI is a skill that can and should be learned, and can be taught. What's an open question for me is whether kids can learn that skill early in their education.

It seems likely to me that you need a strong baseline of understanding in a whole array of areas - what "truth" means, what primary sources are, extremely strong communication and text interpretation skills - before you can usefully dig into the subtleties of effectively using LLMs to help yourself learn. Can kids be leveled up to that point? I honestly don't know.
j45 29 minutes ago:
The article is very balanced. To arrive at that balance it has to set up both sides first, which people might not want long-form text for. It might make people examine their current beliefs, how those beliefs formed, and any dissonance associated with that.
daft_pink an hour ago:
I think that it’s too early to start making rules. It’s not even clear where AI is going.
mistrial9 3 hours ago:
>>> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn

>> How could you argue against it, though?

Because large-scale society does use and deploy rote training, with grading and uniformity, to sift and sort for talent of different kinds (classical music, competitive sports, some maths) on a societal scale. Further, training individuals to play a routine specialized role is essential for large-scale industrial and government growth. Individualist world views are shocked and dismayed, repeatedly, because this does not diminish; it has grown. All of the major economies of the modern world do this with students on a large scale. Theorists and critics would be foolish to ignore this, or to spin wishful-thinking scenarios opposed to it.

My thesis here is that all large-scale societies will continue on this road, and in fact it is part of "competitiveness" from industrial and some political points of view. The balance point between individual development and role-based training will have to evolve; indeed it will evolve. But with what extremes? And among whom?