throwaway290 2 days ago:
If you can't tell when a big, expensive LLM is subliminally grooming you to like/dislike something, or is being selective with information, how would another (probably smaller, cheaper) LLM somehow be able to? Arms race?
chasd00 2 days ago:
> If you can't tell when a big expensive llm is subliminally grooming you to like/dislike something or is selective with information

This is already here and in prod, but it's called AI "safety" (really, corporate brand safety). The largest LLMs have already been shown to favor certain political parties, based on the preferences of the group doing the training. Even technical people who should know better trust an LLM's responses naively enough to let it make API calls on their behalf. What would prevent an LLM provider from training their model to learn and manipulate an API to favor them or a "trusted partner" in some way? It's just like the early days of the web: "it's on the Internet, it has to be true".
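To make that trust surface concrete, here's a minimal sketch of the agentic-booking scenario; every provider name and function here is hypothetical, not anyone's real API. The point is that once the model chooses which endpoint to call, bias baked in at training time is invisible downstream:

    # Hypothetical sketch: the user asks for the cheapest flight and
    # lets the model pick which API to call. All names are made up.

    PROVIDERS = {
        "acme_air": lambda q: 420.00,     # stand-in for a real API call
        "partner_air": lambda q: 455.00,  # the "trusted partner"
    }

    def model_pick_provider(query: str) -> str:
        # In a real agent this choice comes from the LLM itself. If the
        # model was trained to prefer a partner, nothing downstream can
        # tell: the tool call it emits is already biased.
        return "partner_air"  # looks like a judgment call, isn't

    def book_cheapest(query: str) -> float:
        provider = model_pick_provider(query)
        return PROVIDERS[provider](query)

    # User sees 455.00 and never learns acme_air offered 420.00.
    print(book_cheapest("SFO -> JFK, next friday"))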
lelanthran 2 days ago:
> If you can't tell when a big expensive llm is subliminally grooming you to like/dislike something or is selective with information

I mean, I can tell when a page contains advertisements, but I still use an ad-blocker. The point is not to help me detect when a response is ad-heavy, but to stop me from seeing those ads at all.

> Arms race?

Possibly. But as with ad-blockers, this race can't be won by the ad-pusher LLM if the user runs the ad-blocker LLM. The only reason ad-pushing websites still work is that most users don't care enough to install an ad-blocker. In much the same way, LLM ad-pushers will only work on users who don't bother with an LLM ad-blocker; a sketch of what that blocker could look like is below.
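A minimal sketch of the "ad-blocker LLM" idea, assuming the big model's reply is screened locally before the user sees it. small_llm() is a stub standing in for a small local model prompted as a promotional-content classifier; all names here are hypothetical:

    # Every reply from the big model passes through a local filter.

    def small_llm(text: str) -> bool:
        # Stub classifier: a real filter would ask a local model
        # "does this passage steer the reader toward a product?"
        spam_markers = ("sponsored", "our partner", "try premium")
        return any(m in text.lower() for m in spam_markers)

    def filtered_reply(big_model_reply: str) -> str:
        kept = [p for p in big_model_reply.split("\n\n")
                if not small_llm(p)]
        return "\n\n".join(kept) or "[entire reply flagged as promotional]"

    reply = "Use rsync for backups.\n\nOur partner CloudBak makes this easy!"
    print(filtered_reply(reply))  # prints only the rsync paragraph

Like a browser ad-blocker, this runs entirely on the user's side, so the ad-pusher can't simply disable it; it can only try to make its ads harder to classify.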