▲ | qwertylicious 2 days ago |
So let's consider the possibilities:

#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.

#2. Facebook had inadequate mechanisms for evaluating their partners; they could have caught this problem but failed to do so, and therefore Facebook was negligent.

#3. Facebook turned a blind eye to clear red flags that should've prompted further investigation, and Facebook was malicious.

Personally, given Facebook's past extremely egregious behaviour, I think it's most likely a combination of #2 and #3: inadequate mechanisms to evaluate data partners, plus conveniently ignored signals that the data was ill-gotten, which makes Facebook negligent if not malicious. In either case Facebook should be held liable.

pc86 is taking the position that the issue is #1: that Facebook did everything they could and the bad data still made it through, because it's impossible to build a system to catch this sort of thing. If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed, as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology it could not adequately govern.

Does that clarify my position?
▲ | decisionsmatter 2 days ago | parent | next [-]
No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook labeled as "consented and non-sensitive health data" when it was most certainly non-consented and very sensitive health data. But this is the fault of Flo, not Facebook.

You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster its signal detection when it receives data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally, without any attempt to prevent it, is a flawed argument: there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which, per the labels Flo sent, it appeared to have). This case is very squarely #1 in your example and maybe a bit of #2.
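To make the self-certification point concrete, here's a rough sketch in Python (names and structure are entirely hypothetical, not anything from the court docs or Facebook's actual pipeline) of what an ingestion gate that trusts a sender's self-certified labels looks like:

    # Hypothetical sketch only -- invented names, not Facebook's real
    # pipeline. It illustrates the mechanism: an ingestion gate that
    # trusts whatever consent labels the sending app self-certifies.
    from dataclasses import dataclass

    @dataclass
    class AppEvent:
        app_id: str
        payload: dict
        # Self-certified by the sending app (e.g. Flo), not verified here:
        user_consented: bool
        sensitive_health_data: bool

    def ingest(event: AppEvent) -> bool:
        # The check is only as honest as the labels. Data mislabeled as
        # "consented and non-sensitive" passes by construction.
        if event.user_consented and not event.sensitive_health_data:
            process(event.payload)  # stand-in for downstream systems
            return True
        return False  # rejected on the basis of the sender's own labels

    def process(payload: dict) -> None:
        pass  # placeholder for downstream ad/analytics processing

Mislabeled data sails through a gate like this by construction, which is exactly why the fault sits with the party doing the labeling.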
▲ | changoplatanero 2 days ago | parent | prev [-]
"doing everything they could" is quite the high standard. Personally, I would only hold them to the standard of making a reasonable effort. | |||||||||||||||||||||||||||||||||||||||||
|