superkuh | 2 days ago
Mostly it's just a formalization of the established status quo. But the changes regarding training on chat logs have had unintended consequences. For one, the classic IRC megahal bots, which have been around for decades, are now technically disallowed unless you get permission from Libera staff (and the channel ops): they are Markov chains that continuously train on channel contents as they operate. But hopefully, as in the past, the Libera staffers will enforce the spirit of the rules and avoid silly situations like the above caused by imprecise language.
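For context, the "continuously train as they operate" behavior is roughly this (a hypothetical minimal sketch of an order-1 Markov chain, not MegaHAL's actual higher-order model; the `MarkovBot` class and its methods are illustrative names, not real megahal code):

```python
import random
from collections import defaultdict

class MarkovBot:
    """Toy megahal-style bot: every channel line it sees updates
    its transition table, so the model is always training."""

    def __init__(self):
        # word -> list of words observed to follow it (duplicates
        # kept, so frequency naturally weights the random choice)
        self.transitions = defaultdict(list)

    def train(self, line: str) -> None:
        """Called on every incoming channel message."""
        words = line.split()
        for cur, nxt in zip(words, words[1:]):
            self.transitions[cur].append(nxt)

    def generate(self, seed: str, max_words: int = 20) -> str:
        """Random-walk the chain starting from a seed word."""
        out = [seed]
        while len(out) < max_words and out[-1] in self.transitions:
            out.append(random.choice(self.transitions[out[-1]]))
        return " ".join(out)

bot = MarkovBot()
bot.train("the bot reads the channel")
bot.train("the channel trains the bot")
print(bot.generate("the"))
```

The point is that there is no separate "training run": the model *is* the accumulated chat log, which is exactly what puts these bots in the policy's crosshairs.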
comex | 2 days ago
By its wording, the policy is specifically about training LLMs. A classic Markov chain may be a language model, but it's not a large language model. The same rules might not apply.
martin-t | 2 days ago
A classic example of a community self-regulating until it is overwhelmed, at which point rules are imposed which ban previously accepted and harmless behavior. Rules must take scale into account, and do so explicitly, to avoid selective enforcement. There's a difference between one person writing a simple bot and a large corporation offering a bot that pretends to be human to everyone. The first is harmless fun; the second is large-scale for-profit behavior with proportionally large negative externalities.