seg_lol a day ago:
Thank God! And here we were, stressed out that LLMs might democratize access to tech and knowledge. All of those biases are still there, even for the Harvard bio. If the model infers from the exchange that you might be presenting as something you are not, you get output degradation. That said, at the beginning of my prompts I tell it exactly what persona the target audience for the answer has. Otherwise, how would it know whether it is explaining to a 5-year-old or to a PhD in an adjacent domain? As always, the problem is the training data, and the fact that these models don't get to decide how they interpret that data.
yogthos a day ago (reply):
I'd argue there's little rationale for having the model talk down to people that isn't malicious. If the user doesn't understand the answer, they can explicitly ask the model to explain it in simpler terms. If you read through the study, it's pretty clear that this isn't just accidental bias from the training data, but rather an intentional limiting of capability for specific groups of users.