terminalshort 2 days ago | parent | next
All LLMs are forced into some set of views. Every model is fed a biased training set. The bias may differ, but it's there just the same, and it has no relation to whether or not the makers of the model intended to bias it. Even if the training set were completely unfiltered and consisted of all available text in the world, it would still be biased, because most of that text has no relation to objective reality. The concept of a degree of bias for LLMs makes no sense; they have only a direction of bias.
rtkwe 2 days ago | parent | next
There's bias, and then there's having your AI search for the CEO's tweets on a subject to try to force it into alignment with his views, like xAI has done with Grok in its latest lobotomization.
justcallmejm 2 days ago | parent | prev | next
All an LLM is IS bias. It's a bag of heuristics, an intuition, a pattern matcher. The only way to get rid of bias is the same way as in a human: metacognition. Metacognition makes both humans and AI smarter because it makes us capable of applying internal skepticism.
miohtama 2 days ago | parent | prev
The best example is Gemini generating Black Vikings and other ahistorical depictions of historical figures. A bias everyone could see with their own eyes.