ForceBru 2 hours ago
> pushing back against preconceived notions and challenging users to reflect and evaluate

Who decides what needs to be "pushed back"? Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately": machine learning automatically extracts patterns from data, so if enough texts contain a "preconceived notion" you don't like, the model will learn it anyway. You'd have to manually clean the data (extremely hard work, and lowkey censorship) or do extensive "post-training".

It's also not clear what it means to "challenge users to reflect and evaluate". Making the model analyze different points of view and append "but you should think for yourself!" to each answer won't work, because everyone will just skip that last part and be mildly annoyed. It's obvious that I should think for myself, but here's why I'm asking the LLM: I _don't_ want to think for myself right now, or I want to kickstart my thinking. Either way, I need some useful input from the LLM. If the model refuses to answer and always tells me to reflect, I'll just go back to Google search and not use the model at all. In that case, someone just wasted money on training it.
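The point about pattern extraction can be illustrated with a toy bigram model fit on an entirely hypothetical mini-corpus (the corpus, variable names, and the "flat earth" example are all invented for illustration): whatever view dominates the training text becomes the model's most likely output, with no cleaning or post-training to counteract it.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus where one "preconceived notion" dominates.
# A real training set behaves the same way, just at scale.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]

# Fit a bigram model: count word -> next-word transitions.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

# The most likely continuation of "is" mirrors the data's majority view.
prediction = bigrams["is"].most_common(1)[0][0]
print(prediction)  # -> "flat": the dominant pattern wins
```

Nothing in the fitting step knows or cares whether the majority pattern is true; removing it means either filtering the corpus by hand or adjusting the model afterwards, which is exactly the trade-off described above.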
eleventyseven 2 hours ago
> Who decides what needs to be "pushed back"?

Millions of teachers make these kinds of decisions every minute of every school day.
1propionyl an hour ago
> Also, I imagine it's not easy to train a model to notice these "preconceived notions" and react "appropriately"

Then don't. It's easy enough to pay a teacher a salary.