▲ 18al a day ago
The wish for an _AI revolution_ in learning seems to have been granted by a monkey's paw. Articles like this, or [0], or browsing r/teachers [1], or even talking to close ones in college, give a rather grim view of AI use. A paragraph from [0] makes it seem that students understand that LLM use doesn't lead to learning, but they do so anyway. Do they not see effort put into learning as worthwhile?
I myself use LLMs for learning (using ChatGPT's study mode, for instance, r.i.p.) and can see that there's a right way to use it: you reach for it when you hit a wall, not to avoid the friction of developing an understanding. From what I understand, though, most LLM use for learning is just the LLM used as a tool for cheating. Even tfa mentions something of the sort:
The article attributes a _skill issue_ as part of the problem, but how much of that is a motivation or awareness issue? How do you make students realize that learning is worth it?

[0] https://arstechnica.com/science/2026/04/to-teach-in-the-time...
▲ JimsonYang a day ago | parent | next [-]
You never realize the beauty of just learning cool stuff in college and exploring around until you're like 26 and have been graduated for four years.
▲ UncleMeat a day ago | parent | prev [-]
I don't even think it is a monkey's paw. That implies that it looked like it would solve a big problem in our initial estimation. But "students will use the cheating machine to cheat" was obvious from the release of ChatGPT. There was never some period of time where AI looked like it was a net positive for students, only to be revealed to have an unexpected harm.

Even from the folks who claim to use LLMs to learn rather than cheat or avoid work, I've seen so many people admit that they are actually using it to harm themselves. "Oh, I only ask ChatGPT for the answer for really hard problems." Yeah man, doing the hard problems is how you learn.