ASalazarMX 18 hours ago
Yup, current LLMs are trained on the best and the worst we can offer. I think there's value in training smaller models with strictly curated datasets, to guarantee they've learned from trustworthy sources.
chasd00 17 hours ago | parent
> to guarantee they've learned from trustworthy sources.

I don't see how this will ever work. Even in hard science there's debate over what content is trustworthy and what is not. Imagine trying to declare your source of training material on religion, philosophy, or politics "trustworthy".