KaiserPro 5 hours ago
Again, it's all about what's reasonable. Firstly, does the open model explicitly or tacitly allow CSAM generation? Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put protections in place? Thirdly, do they pull in data that is likely to allow that kind of content to be generated? Fourthly, when they are told that this is happening, do they pull the model? Fifthly, do they charge for access or host the service, allowing users to generate said content on their own servers?