mleroy 9 hours ago:
Ontologically, this historical model understands the categories of "Man" and "Woman" just as well as a modern model does. The difference lies entirely in the attributes attached to those categories. The sexism is a faithful map of that era's statistical distribution. You could RAG-feed this model the facts of WWII, and it would technically "know" about Hitler. But it wouldn't share the modern sentiment or gravity. In its latent space, the vector for "Hitler" has no semantic proximity to "Evil".
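(A minimal sketch of what "semantic proximity in latent space" means operationally: cosine similarity between two concept vectors. `get_embedding` below is a hypothetical stand-in for whatever encoder the historical model exposes, not a real API; the claim is that for a pre-1913 model this score stays low even after RAG supplies the post-war facts, because retrieval adds context, not new weights.)

    # Hedged sketch: measure semantic proximity of two concepts as the
    # cosine similarity of their latent vectors.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """~1 means the vectors point the same way (close); ~0 means unrelated."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def get_embedding(concept: str) -> np.ndarray:
        """Hypothetical: return the model's latent vector for a concept."""
        raise NotImplementedError("depends on the model being probed")

    # score = cosine_similarity(get_embedding("Hitler"), get_embedding("Evil"))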
arowthway 7 hours ago (reply):
I think much of the semantic proximity to evil could be derived straight from the facts. Imagine telling a pre-1913 person about the Holocaust.