ForceBru 2 hours ago

> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately". Machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" that you don't like, the model will learn it anyway. You'll have to manually clean the data (which seems like extremely hard work and lowkey censorship) or do extensive "post-training".
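To make the data-cleaning point concrete: at its crudest, cleaning is just pattern-matching over the training corpus. A toy sketch in Python (the banned patterns and corpus are made up, and real pipelines use trained classifiers rather than regexes, but the "who writes the list?" problem is the same):

    import re

    # Hypothetical list of "preconceived notions" someone decided to remove.
    # Who gets to write this list is exactly the question above.
    BANNED_PATTERNS = [re.compile(p) for p in [
        r"the earth is flat",
        r"vaccines cause autism",
    ]]

    def keep(document: str) -> bool:
        """Keep a training document only if it matches no banned pattern."""
        text = document.lower()
        return not any(p.search(text) for p in BANNED_PATTERNS)

    corpus = [
        "The earth is flat, obviously.",
        "Photosynthesis converts light into chemical energy.",
    ]
    cleaned = [doc for doc in corpus if keep(doc)]
    # cleaned == ["Photosynthesis converts light into chemical energy."]

Scaling that from two strings to a web-sized corpus is where the "extremely hard work" comes in.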

It's not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and add a "but you should think for yourself!" after each answer won't work because everyone will just skip this last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM.

If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use this model at all. In this case someone just wasted money on training the model.

eleventyseven 2 hours ago | parent | next [-]

> Who decides what needs to be "pushed back"?

Millions of teachers make these kinds of decisions every minute of every school day.

mhuffman an hour ago | parent | next [-]

So would your recommendation be that each individual teacher puts in their own guardrails, or that you try to get millions of teachers to agree?

ForceBru an hour ago | parent | prev [-]

True, but teachers don't train LLMs. Good LLMs can only be trained by massive corporations, so training an "LLM for schools" must be centralized. This would of course be supervised by the government, so the government ends up deciding what needs pushback and what kind of pushback. That alone is not easy: someone has to enumerate the things that need pushback, provide examples of such "bad things", provide "correct" alternatives, and so on, all of which then feeds into data curation.

Teachers are also "local". The resulting LLM would have to be approved nationwide, which is a whole can of worms. Or do we need multiple LLMs of this kind? How would they differ from each other?

Moreover, people will hate this because they'll be aware of it. There will be a government-approved, sanitized "LLM for schools" that exhibits particular "correct" and "approved" behavior. Everyone will understand that "pushing back" is one of the purposes of the LLM and that it was made specifically for (indoctrination of) children. What is this, "1984" or whatever other dystopian novel?

Many of the things that may "need" pushback are currently controversial. Can a man be pregnant? "Did the government just explicitly allow my CHILD to talk to this LLM that says such vile things?!" (whatever the "things" may actually be). I guarantee parents from all political backgrounds are going to be extremely mad.

NegativeK an hour ago | parent [-]

I think you're interpreting the commenter's/article's point in a way that they didn't intend. At all.

Assume the LLM has the answer a student wants. Instead of just blurting it out to the student, the LLM can:

* Ask the student questions that encourage them to think about the overall topic.

* Ask the student what they think the right answer is, and then drill down on the student's incorrect assumptions so that they arrive at the right answer.

* Ask the student to come up with two opposing positions and explain why each would _and_ wouldn't work.

Etc.
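Concretely, most of this doesn't even need retraining; it can be approximated with a system prompt on an existing model. A rough sketch, assuming an OpenAI-style chat-completions API (the prompt wording and model name are placeholders, not anything from the article):

    from openai import OpenAI  # assumes the `openai` Python package

    SOCRATIC_PROMPT = (
        "You are a tutor. Never state the final answer outright. "
        "First ask the student what they think and why. "
        "Then question the weakest assumption in their reasoning. "
        "Only confirm the answer once the student has derived it themselves."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": "What causes the seasons?"},
        ],
    )
    print(reply.choices[0].message.content)

Whether students would tolerate a tutor that never just answers is a separate question, of course.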

None of this has to get anywhere near politics or whatever else conjured your dystopia. If the student asked about politics in the first place, this type of pushback doesn't have to be any different than current LLM behavior.

In fact, I'd love this type of LLM -- I want to actually learn. Maybe I can prompt an existing one to behave this way and try it.

ForceBru 25 minutes ago | parent [-]

In fact, I agree with the article! For instance, many people do offload thinking to LLMs, potentially "leading to the kind of cognitive decline or atrophy more commonly associated with aging brains". It also makes sense that students who use LLMs are not "learning to parse truth from fiction ... not learning to understand what makes a good argument ... not learning about different perspectives in the world".

Somehow "pushing back against preconceived notions" is synonymous to "correcting societal norms by means of government-approved LLMs" for me. This brings politics, dystopian worlds and so on. I don't want LLMs to "push back against preconceived notions" and otherwise tell me what to think. This is indeed just one sentence in the article, though.

1propionyl an hour ago | parent | prev [-]

> Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"

Then don't. It's easy enough to pay a teacher a salary.

ForceBru an hour ago | parent [-]

Yep, fully agree with this