delichon 9 hours ago

> if you believe left-wing views are correct ... you might believe that a very smart model will inherently be kind of left-wing.

How can we educate people to understand that LLMs get their values from their (infinitely malleable) weights rather than from intelligence or reasoning? Maybe some exposure to truly non-aligned, sick and twisted LLMs would immunise people against giving more ordinary ones too much authority. Or maybe, like a not fully inactivated pathogen vaccine, it would spread the infection.
tim333 5 hours ago | parent | next

They seem to get a lot of their values, or something like that, from their training data, which at the moment reflects fairly mainstream views since everything gets chucked in there.
9 hours ago | parent | prev

[deleted]