FabCH 3 hours ago
They did. The EU Commission reported that the false positive rate was 13-20%, and German police reported that 50% of all reports were wrong. The system is rubbish, and EU MEPs were quite open about wanting it to go away.
bluGill 2 hours ago
What are the false negative rate and the total case counts? Without those we're missing too much. If the false negative rate (saying something is fine when it isn't) is high, then the whole thing is useless. And if the total cases are only a few hundred (either CSAM isn't really a problem, or those doing it use other platforms because they know they'd be caught on these), I don't care much that some reports are false positives - odds are it didn't get me.
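A rough sketch of why those two numbers matter. This is a toy calculation, not data from any report: the scan volume, prevalence, and false negative rate below are all assumptions for illustration; only the 0.32% false positive rate comes from the figures quoted elsewhere in this thread.

    # Toy model of how prevalence and error rates combine into report quality.
    # Every number here is an assumption for illustration, not from the report.
    scanned = 1_000_000_000    # items scanned (assumed)
    prevalence = 1e-6          # fraction that is actually CSAM (assumed)
    fpr = 0.0032               # 0.32% false positive rate (low end reported)
    fnr = 0.10                 # false negative rate (assumed; not reported)

    true_pos = scanned * prevalence * (1 - fnr)
    false_pos = scanned * (1 - prevalence) * fpr
    precision = true_pos / (true_pos + false_pos)
    print(f"total reports:   {true_pos + false_pos:,.0f}")
    print(f"missed cases:    {scanned * prevalence * fnr:,.0f}")
    print(f"correct reports: {precision:.3%}")

With numbers in this ballpark, even the best reported FPR means the vast majority of reports point at innocent users, while the (unreported) FNR is what tells you how many real cases slip through.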
throwaway89201 2 hours ago
The report you're referring to, by the European Commission [1], shows that the mass surveillance of Chat Control 1.0 is probably not very proportionate. They even note themselves that "The available data are insufficient to provide a definitive answer to this question".

However, the "13-20%" you're quoting is a dishonest propaganda number itself. It's the false positive rate that a single small company (Yubo) reported. The false positive rates reported by other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers (see the back-of-the-envelope sketch below).

Just to be clear: the report itself is full of uncertainty, convenient half-truths, and false causality. For example, it relies entirely on the Big Tech platforms themselves to count false positives, counting one only when a moderation decision was reversed. Microsoft apparently even claims that no user ever appealed a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, even though it is in direct conflict with fundamental rights and required to be proportionate to its goals.

The section about "children identified" is also a complete mess: most countries can't even report the most basic data, and it isn't clear whether mass surveillance contributed anything to new cases at all. Yet somehow the report still concludes that "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems entirely baseless.

[1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...
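To make "high in absolute numbers" concrete, here's the back-of-the-envelope sketch mentioned above. Only the 0.32%-1.5% range comes from the report; the scan volumes are my assumption:

    # False reports implied by the reported FPR range at assumed scan volumes.
    for fpr in (0.0032, 0.015):            # 0.32% to 1.5%, as reported
        for scanned in (10**8, 10**9):     # assumed volumes, not from the report
            print(f"FPR {fpr:.2%} over {scanned:>13,} items -> "
                  f"{fpr * scanned:>10,.0f} false reports")

Even at the low end, the error count scales linearly with volume, so a "small percentage" says very little about proportionality at platform scale.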
SpicyLemonZest 3 hours ago
I'm sure a lot of HN commenters would agree that a CSAM detection system with a 13-20% false positive rate should be terminated, but we're not EU regulators. And you've got a sibling comment saying this would be malicious compliance, so even on HN it's not unanimous. Is there an example of a specific EU official, MEP, etc. explicitly stating that tech companies should not perform hash-based CSAM detection, or should not perform CSAM detection at all?