pcf 6 hours ago

He said: "LLM is going to change schools and universities a lot"

You said: "No it won't. It really, really wont."

Given the explosive development of LLMs and their abilities, it seems your point of view is the hopeful one, while the other poster's is the realistic one.

It seems you simply can't make confident claims about what LLMs will never be able to do. Especially when you use current "AI slop" as your main reason, which is steadily being eradicated.

traceroute66 5 hours ago | parent [-]

> "AI slop" as your main reason, which is being more and more eradicated.

The slop is the hard truth.

As I made perfectly clear in my original post, my university professor friends get handed AI slop by their students each and every day.

There is no "eradication of slop" happening. If anything, it is getting worse. Trust me, my friends see the output from all the latest models land on their desks.

The students think they are being very clever; they think the magical LLM is the best thing since sliced bread.

All the professor sees is a wall of slop on their desk and a student who is not learning how to reason and think with their own damn brain.

And when the professor tries, politely and patiently, to challenge them and test their understanding, as you would expect in a university environment, the snowflake students just whine and complain, because they know they've been caught out drinking the LLM Kool-Aid again for the 100th time this week.

Hence the student is wasting their time and money at university, and the professor is wasting their time trying to teach someone who is clearly not interested in learning, because they think they can get the answer from an LLM chatbot in five seconds.

My professor friends chose the career they did because they enjoy the challenge of helping students along the way through their courses and watching them develop.

They are no longer seeing that same development in their students. And instead of devoting time to helping students, they are wasting time dreaming up over-engineered, fiendishly complicated lab tasks and tests that students cannot cheat on using an LLM.

It is honestly a lose-lose situation for everybody.

culopatin 5 hours ago | parent | next [-]

I think you're missing the point. The conversation is not about what students hand in to professors, it's about how students learn. This obviously requires someone who wants to learn.

traceroute66 5 hours ago | parent | next [-]

> it's about how students learn. This obviously requires someone who wants to learn.

FINALLY! Someone who gets the point I was trying to make. I wish I could upvote you a million times.

This is precisely the point. Professors are happy to help people who want to learn.

Students who prefer to copy/paste into LLMs do not want to learn. University is there to foster learning and reasoning using your own brain. An LLM helps with neither.

saltcured 4 hours ago | parent | prev [-]

Sweep aside the misunderstanding about students trying to "cheat" with LLM output instead of engaging with the topic at hand. I think there is a secondary debate here, even once you understand the original intent of the post above. It still boils down to the same concern about "slop": not the student presenting slop to the existing teaching system, but the student being led astray by the slop they are consuming on their own.

Being an autodidact has always been a double-edged sword. You can potentially accelerate your learning and find your own specialization, but it is an extremely easy failure mode to turn yourself into a semi-educated crank. Once in a while this produces a renegade genius who opens new branches of knowledge, but more often it aborts useful learning: the crank gets lost in their half-baked ontology, unable to fix its flaws or progress to more advanced topics.

The whole long history of learning institutions is, in part, an attempt to manage this very human risk. One of a teacher's main roles is to recognize a student who is spiraling out in this manner and steer them back. Nearly everyone has the potential to incrementally develop this sort of self-delusion if they are not reality-checked on a regular basis. It takes incredible diligence to self-govern and never lose yourself in the chase.

This is where "sycophancy" in LLMs is a bigger problem than mere diction. If the AI continues to function as a sort of keyhole predictor, it does not have the context to model a big-picture purpose like education and keep all the incremental wanderings on course and bound to reality. Instead, it can amplify this worst-case scenario, where you plunge down some rabbit hole.

wiseowise 5 hours ago | parent | prev [-]

I sure hope those "university professor friends" exist, and you're not self-distancing. Because you really need help with a mindset like that. Students are not your enemies, and LLMs are not out to get you. Seek help.