▲ | espadrine | 4 hours ago |
When Meta prevented the EU from using meta.ai or even downloading its vision models, I dug into the AI legislation. Honestly, I am not sure which part they rely on to claim that what they made might be unlawful. The closest thing I found for Meta is that “emotion recognition systems” are classified as high-risk (paragraph 54), and high-risk systems must have their training data disclosed (Art 11(1))[0]. In theory, you could upload photos to meta.ai and ask it what emotions are displayed, but even that is a stretch.

For GenChess, I'm at a loss; it doesn't sound like you could do that with it. (Not that any of this prevented other vision chatbots from releasing.)

If someone has a better guess as to why they restricted it here, I'm curious.

[0]: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A...
▲ | sunbum | 2 hours ago | parent [-] |
It's simple: big tech doesn't like regulation. By pretending the regulation won't let them release "all these cool things," they can turn public opinion against it.