qwertylicious 2 days ago

Yeah, sorry, no, I have to disagree.

We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"

LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.

Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor allowed to misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.

If Facebook did indeed build a data pipeline and targeted-advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.

pc86 2 days ago | parent [-]

What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?

None of your examples has anything to do with the thing we're talking about; they're just meant to inflame emotional opinions rather than engender rational discussion about this issue.

qwertylicious 2 days ago | parent | next [-]

That's not my problem to solve?

If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.

You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.

Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.

> None of your examples has anything to do with the thing we're talking about; they're just meant to inflame emotional opinions rather than engender rational discussion about this issue.

Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of its actions, and b) accept accountability.

That context matters.

myaccountonhn 2 days ago | parent | next [-]

I often think about what having accountability in tech would entail. These big tech companies only work because they can neglect support and any kind of oversight.

In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.

decisionsmatter 2 days ago | parent | prev [-]

It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third-party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users before sending user data into the ingestion pipeline, per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and was collected with consent_ into their ads systems.
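
To make the "attested" part concrete, here's a rough sketch of what that honor-system ingestion amounts to. All names, fields, and functions below are invented for illustration; Meta's actual SDK and event schema differ:

    // Hypothetical shape of a self-certified partner event. The field
    // and function names here are illustrative, not Meta's real schema.
    interface PartnerEvent {
      appId: string;
      eventName: string;
      payload: Record<string, string>;
      attestation: {
        userConsented: boolean;          // the app's own claim
        containsSensitiveData: boolean;  // also self-reported
      };
    }

    // The honor system in code: ingestion gates on the label alone.
    function ingest(event: PartnerEvent): void {
      if (!event.attestation.userConsented || event.attestation.containsSensitiveData) {
        console.log(`rejected event from ${event.appId}`);
        return;
      }
      // Nothing downstream re-checks the claim, so a mislabeled event
      // flows straight into ad targeting.
      forwardToAdsPipeline(event);
    }

    function forwardToAdsPipeline(event: PartnerEvent): void {
      console.log(`targeting on ${event.eventName}`);
    }

The whole dispute in this thread is about whether gating on that self-reported attestation alone counts as adequate diligence.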

qwertylicious 2 days ago | parent [-]

So let's consider the possibilities:

#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.

#2. Facebook had inadequate mechanisms for evaluating their partners, and that while they could have caught this problem they failed to do so, and therefore Facebook was negligent.

#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.

Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.

pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.

If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.

Does that clarify my position?

decisionsmatter 2 days ago | parent | next [-]

No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook with the label that it was "consented and non-sensitive health data" when it was most certainly unconsented and very sensitive health data. But this is the fault of Flo. Not Facebook.

You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when they receive data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which, per Flo's attestation, this data did). This case is very squarely #1 in your example and maybe a bit of #2.
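
On the second point, a crude version of that signal detection is just a content heuristic over incoming payloads. This is a minimal sketch under my own assumptions (the keyword list and function names are invented, not anything Meta is known to run):

    // Flag events whose field names or values look like health data,
    // regardless of what the sender attested. Keyword list is illustrative.
    const HEALTH_HINTS = ["period", "ovulation", "pregnan", "symptom", "cycle"];

    function looksLikeHealthData(payload: Record<string, string>): boolean {
      const text = Object.entries(payload)
        .flatMap(([key, value]) => [key, value])
        .join(" ")
        .toLowerCase();
      return HEALTH_HINTS.some((hint) => text.includes(hint));
    }

    // Quarantine suspicious events for review instead of forwarding
    // them to the ads pipeline on the app's say-so.
    function screen(payload: Record<string, string>): "forward" | "quarantine" {
      return looksLikeHealthData(payload) ? "quarantine" : "forward";
    }

A quarantine-and-review step like this wouldn't catch everything, but it's the kind of best-effort control the KYC comparison upthread is gesturing at.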

ryandrake 2 days ago | parent | prev | next [-]

If FB is going to use the data, then it should have the responsibility to check whether they can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.

To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.

gruez 2 days ago | parent [-]

>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.

AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There are no such obligations for other kinds of sensitive data.

pixl97 2 days ago | parent | prev | next [-]

Mens rea vs actus reus.

In some crimes actus reus is what matters. For example, if you're handling stolen goods (in the US), the state can seize those goods and any gains from them, even if you had no idea they were stolen.

Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or any other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrongdoing, then it can't!"

shkkmo 2 days ago | parent | prev | next [-]

>Facebook did not make a conscious decision to process this data.

Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.

> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents

This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.

If those systems exist, they clearly failed to work. However, the court documents indicate that Facebook didn't build out systems to check whether incoming data was health data until after the fact.

Capricorn2481 2 days ago | parent | prev [-]

> Facebook did not make a conscious decision to process this data. It just did.

What everyone else is saying is that what Facebook did was illegal, and that doing it automatically makes it worse. The system you're describing was, in fact, built to do exactly that. They are advertising to people based on the honor system of whoever submits the data pinky-promising it was consented. That's absurd.

changoplatanero 2 days ago | parent | prev [-]

"doing everything they could" is quite the high standard. Personally, I would only hold them to the standard of making a reasonable effort.

qwertylicious 2 days ago | parent [-]

Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).

const_cast 15 hours ago | parent | prev [-]

> What does the system look like where a human being individually verifies every piece of data fed into an advertising system?

Probably what it looked like 20 years ago.

Also, relatedly, if there's no moral or ethical way to conduct your business model, that doesn't mean that you're off the hook.

The correct outcome is your business model burns to the ground. That's why I don't run a hitman business, even though it would be lucrative.

If mass scale automated targeted advertising cannot be done ethically, then it cannot be done at all. It shouldn't exist.