yogthos a day ago
I'd argue there's little rationale for having the model talk down to people that isn't malicious. If users don't understand an answer, they can explicitly ask the model to explain it in simpler terms. If you read through the study, it's pretty clear that this isn't just accidental bias from the training data, but rather an intentional limiting of capability for specific groups of users.