koliber | 7 hours ago:
This happens with human-generated executive summaries too. They can omit seemingly innocuous things, focus on certain areas, and frame numbers in ways that color the summary. It's always important to know who wrote a summary if you want to know how much heed to pay it. This is called bias, and every human has their own. Sometimes the executive assistant wields a lot more power in an organization than is apparent at first glance. What the author seems to be saying is that the system prompt can be used to instill bias in LLMs.
skybrian | 3 hours ago:
Yes, that's brought up in the first part of the article. She goes on to discuss how performance differs depending on the language used, and how that affects safety guardrails. Apparently some language models do quite a bit worse in some languages. (The models tested aren't the latest ones.)
otabdeveloper4 | 7 hours ago:
> What the author seems to be saying is that the system prompt can be used to instill bias in LLMs.

That's, like, the whole point of system prompts. "Bias" is how they do what they do.
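As a concrete illustration of the mechanism being discussed, here is a minimal sketch (assuming the OpenAI Python SDK; the model name, prompts, and report text are illustrative, not from the article): the same input summarized under two different system prompts comes back framed differently, which is exactly the "bias" the system prompt instills.

    # Minimal sketch: the system prompt steers how an LLM frames a summary.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
    # model name, prompts, and report text are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()

    report = ("Q3 revenue grew 2%, churn rose from 5% to 8%, "
              "and headcount was cut by 10%.")

    system_prompts = {
        "upbeat": "You summarize reports for the CEO. Emphasize growth and momentum.",
        "cautious": "You summarize reports for the audit committee. Emphasize risks and negative trends.",
    }

    for label, system_prompt in system_prompts.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": f"Summarize in one sentence: {report}"},
            ],
        )
        print(label, "->", response.choices[0].message.content)

Running both prompts over the same report would typically yield one summary that leads with revenue growth and another that leads with rising churn, even though the underlying facts are identical.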