gruez 2 days ago

As much as I don't like facebook as a company, I think the jury reached the wrong decision here. If you read the complaint[1], "eavesdropped on and/or recorded their conversations by using an electronic device" basically amounted to "flo using facebook's sdk and sending custom events to it" (page 12, point 49). I agree that flo should be raked over the coals for sending this information to facebook in the first place, but ruling that facebook "intentionally eavesdropped" (exact wording from the jury verdict) makes zero sense. So far as I can tell, flo sent facebook menstrual data without facebook soliciting it, and facebook specifically has a policy against sending medical/sensitive information using its SDK[2]. Suing facebook makes as much sense as suing google because it turned out a doctor was using google drive to store patient records.

[1] https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...

[2] https://storage.courtlistener.com/recap/gov.uscourts.cand.37... page 6, line 1
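
For concreteness, "sending custom events" through the Facebook SDK looks roughly like this on Android (a sketch from memory of the standard AppEventsLogger API; the parameter is hypothetical and the event name is of the kind the complaint describes):

    import android.content.Context
    import android.os.Bundle
    import com.facebook.appevents.AppEventsLogger

    // Illustrative sketch: the app picks an arbitrary event name and parameters,
    // and the SDK forwards them to Facebook's analytics/ads backend. Nothing in
    // the call itself marks the payload as health data.
    fun logCustomEvent(context: Context) {
        val logger = AppEventsLogger.newLogger(context)
        val params = Bundle().apply {
            putString("source", "onboarding") // hypothetical parameter
        }
        logger.logEvent("R_PREGNANCY_WEEK_CHOSEN", params) // event name of the kind described in the complaint
    }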

gpm 2 days ago | parent | next [-]

At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.

The amended complaint, [3], includes the allegations against Facebook as at that time Facebook was added as a defendant to the case.

Amongst other things, the amended complaint points out that Facebook's behavior lasted for years (into 2021) after it was publicly disclosed that this was happening (2019), and that even after Flo was forced by the FTC to cease the practice and congressional investigations were launched (2021), Facebook refused to review and destroy the data that had previously been improperly collected.

I'd also be surprised if discovery didn't provide further proof that Facebook was aware of the sort of data they were gathering here...

[3] https://storage.courtlistener.com/recap/gov.uscourts.cand.37...

gruez 2 days ago | parent [-]

>At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.

Are you talking about this?

>As one of the largest advertisers in the nation, Facebook knew that the data it received from Flo Health through the Facebook SDK contained intimate health data. Despite knowing this, Facebook continued to receive, analyze, and use this information for its own purposes, including marketing and data analytics.

Maybe something came up in discovery that documents the extent of this, but this doesn't really prove much. The plaintiffs are just assuming that because there's a clause in the ToS saying so, facebook must be using the data for advertising.

gpm 2 days ago | parent [-]

No...

In the part of my post that you quoted I'm literally just talking about the cover page of [1] where the defendants are listed, and at the time only Flo is listed. So nothing against Facebook/Meta is being alleged in [1]. They got added to the suit sometime between that document and [3] - at a glance probably as part of consolidating some other case with this one.

Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.

gruez 2 days ago | parent [-]

>Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.

The quote from my previous comment was taken from the amended complaint ([3]) that you posted. Skimming that document, it's unclear what facebook actually did between 2019 and 2021. The complaint only claims flo sent data to facebook between 2016 and 2019, and after a quick skim the only connection I could find for 2021 is a report published that year slamming the app's privacy practices, which didn't call out facebook in particular.

gpm a day ago | parent [-]

Ah, sorry, the paragraphs in [3] I'm looking at are

21 - For the claim that there was public reporting that Facebook was presumably aware of in 2019.

26 - For the claim that in February 2021 Facebook refused to review and destroy the data they had collected from Flo to that date, and thus presumably still had and were deriving value from the data.

I can't say I read the whole thing closely though.

jlarocco 2 days ago | parent | prev | next [-]

That's only the first part of the story, though.

Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.

Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.

gruez 2 days ago | parent | next [-]

>Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so.

What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or not. Whether they were using it for advertising purposes might be relevant in armchair discussions about whether meta is evil, but shouldn't be relevant when it comes to the eavesdropping charge.

>They knew, or should have known, that they needed to check if it was legal to use it

What do you think this should look like?

thinkingtoilet 2 days ago | parent | next [-]

Should large corporations be able to break the law because it's too hard for them to manage their data? Should they be immune from lawsuits because actively moderating their product would hurt their business model? Does Facebook have a right to exist?

You know exactly what it would look like. It would look like Facebook being legally responsible for using the data they get. If they are too big to do that or are getting too much data to do that, the answer isn't to let them off the hook. Also, let's not pretend Facebook doesn't have a 15-year history of actively misusing data. This is not a one-off event.

gruez 2 days ago | parent [-]

>Should large corporations be able to break the law because [...]

No, because this is begging the question. The point being disputed is whether facebook offering a SDK and analytics service counts as "intentionally eavesdropping". Anyone with a bit of understanding of how SDKs work should think it's not. If you told your menstrual secrets to a friend, and that friend then told me, that's not "eavesdropping" to any sane person, but that's essentially what the jury ruled here.

I might be sympathetic if facebook was being convicted of "trafficking private information" or whatever, but if that's not a real crime, we shouldn't be using "intentionally eavesdropping" as a cudgel against it just because we hate it. That goes against the whole concept of rule of law.

jjulius 2 days ago | parent | prev | next [-]

>What do you think this should look like?

My honest answer that I know is impossible:

Targeted advertising needs to die entirely.

const_cast 16 hours ago | parent [-]

I don't think it's impossible. If it's too hard to do something legally, then the solution is don't do it.

Like, for example, running a gambling operation is very risky and has a high compliance barrier. So most companies just don't. In fact, most B2B companies won't even sell to gambling companies, depending on what exactly they're selling.

banannaise 2 days ago | parent | prev | next [-]

> What do you think this should look like?

Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.

But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.

gruez 2 days ago | parent [-]

>Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.

Facebook isn't running an electronic medical records business. It has no expectation that it's going to be receiving sensitive data, and specifically discourages it. What more are you expecting? That any company dealing with bits should have a moderation team poring over all records to make sure they don't contain "sensitive data"?

>But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.

Running an analytics service that allows apps to send arbitrary events is "move fast, break things" now?

const_cast 16 hours ago | parent | next [-]

Whether you are a medical records processing service doesn't depend on how you identify yourself; it depends on whether you process medical data.

Evidently Facebook does use medical data for targeted advertising. So they are a medical records business.

whstl a day ago | parent | prev [-]

Is this a simple hosted analytics service, where outputs are only accessible by Flo, or does Facebook use this data in any other meaningful way?

If this is used for targeting, I’m afraid we can’t call this just an “analytics service”.

SilasX 2 days ago | parent | prev | next [-]

Yeah, I'm not sure if I'm missing something, and I don't like to defend FB, but ...

AIUI, they have a system for using data they receive to target ads. They tell people not to put sensitive data in it. Someone does anyway, and it gets automatically picked up to target ads. What are they supposed to do on their end? Even if they apply heuristics for "probably sensitive data we shouldn't use"[1], some stuff is still going to get through. The fault should still lie with the entity that passed on the sensitive data.

An analogy might be that you want to share photos of an event you hosted, and you tell people to send in their pics, while enforcing the norm, "oh make sure to ask before taking someone's photo", and someone insists that what they sent in was compliant with that rule, when it wasn't. And then you share them.

[1] Edit: per your other comment, they indeed had such heuristics: https://news.ycombinator.com/item?id=44901198

jlarocco 2 days ago | parent | next [-]

It doesn't work like that, though.

Companies don't get to do whatever they want just because they didn't put any safeguards in place to prevent illegally using the data they collected.

The correct answer is to look at the data and verify it's legal to use.

I might be sympathetic to a tiny startup that faces increased costs, but it's a cost of doing business just like anything else. And Facebook has more than enough resources to put safeguards in place, and they definitely should have known better by now, so they should get punished for not complying.

SilasX 2 days ago | parent [-]

> The correct answer is to look at the data and verify it's legal to use.

So repeal Section 230 and require every site to manually evaluate all content uploaded for legality before doing anything with it? If it’s not reasonable to ask sites to do that, it’s not reasonable to ask FB to do the same for data you send them.

Your position seems to vary based on how big/sympathetic the company in question is, which is not very even-handed and implicitly recognizes the burden of this kind of ask.

const_cast 16 hours ago | parent [-]

Not before doing anything with it, just before processing it for specific business use cases like targeting.

Running a forum is fine, and I don't care if someone inputs a fake SSN on a forum post.

I DO care if someone inputs a fake SSN on a financial form I provided, and it is actually my responsibility to prevent that. That's what KYC is and more.

mlyle 2 days ago | parent | prev | next [-]

The problem is, the opposite approach is...

"We're scot free, because we told *wink* people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"

Please don't sell me cocaine *snifffffffff*

> The fault should still lie with the entity that passed on the sensitive data.

Some benefits to making it be both:

* Centralize enforcement with more knowledgable entities

* Enforce at a level where the misdeeds can actually be identified and have scale, rather than death from a million cuts

* Prevent the central entity from using deniable proxies and cut-throughs to do bad things

This whole notion that we want so much scale, and that scale is an excuse for not paying attention to what you're doing or exercising due diligence, is repugnant. It pushes some cost down but also causes a lot of social harm. If anything, we should expect more ownership and responsibility from those with concentrated power, because they have more ability to cause widescale harm.

gruez 2 days ago | parent [-]

>"We're scot free, because we told wink people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"

>Please don't sell me cocaine snifffffffff

Maybe there's something in discovery that substantiates this, but so far as I can tell there's no "wink" happening, officially or unofficially. A better analogy would be charging amazon with drug distribution because some enterprising drug dealer decided to use FBA to ship drugs, but amazon was unaware.

mlyle a day ago | parent [-]

Facebook gets a fiscal benefit when the counterparty to the contract breaks the rule, and so has no incentive to enforce it (rather, the opposite).

Unless, of course, Facebook is held accountable for not enforcing it.

bee_rider 2 days ago | parent | prev [-]

I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.

If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing whether people consented to have their photos shared, depending on the nature of the venue.

At this point it is becoming barely an analogy though.

SilasX 2 days ago | parent [-]

>I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.

You can't, though -- not perfectly, anyway. Whatever the informal norms, there are going to be people who violate them, and so the fault shouldn't pass on to you when you don't know someone is doing that. If anything, the analogy understates how unreasonable it is to FB, since they had an explicit contractual agreement for the other party not to send them sensitive data.

And as it stands now, websites aren't expected to pre-filter for some heuristic on "non-consensual user-uploaded photographs" (which would require an authentication chain), just to take them down when informed they're illegal ... which FB did (the analog of) here.

>If you are a business that host events and your business model involves photos of the event, you should have a professional approach to knowing if people consented to have their photos shared, depending on the nature of the venue.

I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.

bee_rider 2 days ago | parent [-]

> I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.

It depends on the event and the nature of the venue. But yes, it is a bad analogy. For one thing Facebook is not an event with clearly delineated borders. It should naturally be given much higher scrutiny than anything like that.

AtlasBarfed 2 days ago | parent | prev [-]

[flagged]

richwater 2 days ago | parent [-]

[flagged]

yibg 2 days ago | parent | prev [-]

I don't like to defend facebook either but where does this end? Does google need to verify each email it sends in case it contains something illegal? Or AWS before you store something in a publicly accessible S3 bucket?

AnotherGoodName 2 days ago | parent | next [-]

Here's one that we really don't want to acknowledge because it may give some sympathy towards Facebook (i do not work for them but am well aware of Cambridge Analytica);

Cambridge Analytica was entirely a third party using "Click here to log in via Facebook and share your contacts" via FB's OpenGraph API.

Everyone is sure in their mind that the scandal was Facebook just giving away all user details, but if you look at the details, the company was using the Facebook OpenGraph API, and users were blindly hitting 'share' (including all contact details, allowing targeted political campaigning) when using the Cambridge Analytica quiz apps. Facebook's fault was allowing Cambridge Analytica permission to that API (although at the time they granted pretty much anyone access to it, since they figured users would read the popups).

Now you might say "a login popup that confirms you wish to share data with a third party is not enough" and that's fair. Although that pretty much describes every OAuth flow out there really. Also think about it from the perspective of any app that has a reasonable reason to share a contacts list. Perhaps you wish to make an open source calendar and have a share calendar flow? Well there's precedent that you're liable if someone misuses that API.
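
To make that concrete: the "consent" step was essentially a standard Facebook Login permission request, roughly like this on Android (a hedged sketch; the exact method signature and permission names have changed across SDK versions, and the Cambridge Analytica-era Graph API v1 permissions were much broader than what exists today):

    import android.app.Activity
    import com.facebook.login.LoginManager

    // Hedged sketch: the third-party app asks Facebook Login for scoped
    // permissions, the user sees a dialog, and on approval the app receives
    // a token limited to whatever the user agreed to share.
    fun requestShareConsent(activity: Activity) {
        LoginManager.getInstance()
            .logInWithReadPermissions(activity, listOf("public_profile", "user_friends"))
    }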

We all hate big tech. So do juries. We'll jump at the chance to find them guilty and no one else in tech will complain. But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.

filoeleven 2 days ago | parent [-]

> But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.

Doesn't everything else in your post kinda point to the industry needing a little stifling? Or, more kindly, a big rethink on privacy and better controls over one's own data?

Do you have an example of a similarly terrible precedent in your opinion? One that doesn't include the blatant surveillance state power-grabbing "think of the children" line. Just curious.

banannaise 2 days ago | parent | prev [-]

Ideally, it ends with Facebook implementing safeguards on data that could be illegal to use, and having a compliance process that rejects attempts to access that data for illegal reasons.

prasadjoglekar 2 days ago | parent | prev | next [-]

Flo shouldn't have sent those data to FB. That's true. Which is why they settled.

But FB, having received this info, proceeded to use it and mix it with other signals it gets. Which is what the complaint against FB alleged.

changoplatanero 2 days ago | parent [-]

I wish there was information about who at Facebook received this information and “used” it. I suspect it was mixed in with 9 million other sources of information and no human at Facebook was even aware it was there.

xnorswap 2 days ago | parent | next [-]

Is your argument that it's fine to just collect so much information that you can't possibly responsibly handle it all?

In my opinion, that isn't something that should be allowed or encouraged.

maccard 2 days ago | parent [-]

I’m not the OP, but no: I think their point is that if you tell people this data will be used for X and not to send sensitive data that way, and they do it anyway, you can’t really be held responsible for it - the entity who sent you the data and ignored your terms should be.

const_cast 15 hours ago | parent [-]

Or you can both be partially responsible, with most of the responsibility on those who sent it.

dehrmann 2 days ago | parent | prev | next [-]

Not at Facebook, but I used to work on an ML system that took well-defined and free-form JSON data and ran ML on it. Both were used in training and classification. Unless a human looked, we had no idea what those custom fields were. We also had customers lie about what the fields represent for valid and less valid reasons.

Without knowing how it works at Facebook, it's quite possible the data points got slurped in, the models found meaning in the data and acted on it, and no human knew anything about it.
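
As a purely hypothetical sketch of the pattern (not Facebook's actual code): free-form key/value payloads get flattened into hashed feature IDs, so a model can "use" a field that no human ever inspected or recognized.

    // Hypothetical sketch of feature hashing over free-form event payloads.
    fun toFeatures(event: Map<String, String>): Map<Int, Double> =
        event.entries.associate { (key, value) ->
            // hash raw "key=value" strings into a fixed feature space
            val featureId = ("$key=$value".hashCode() and 0x7fffffff) % 1_000_000
            featureId to 1.0
        }

    fun main() {
        // "pregnancy_week=8" is just another anonymous feature index to the model
        println(toFeatures(mapOf("event" to "R_PREGNANCY_WEEK_CHOSEN", "pregnancy_week" to "8")))
    }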

jdashg 2 days ago | parent [-]

How it happened internally is irrelevant to whether Facebook is responsible. Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!

There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.

gruez 2 days ago | parent [-]

>Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!

We can argue the "moral" aspect until we're both blue in the face, but did facebook have any legal responsibilities to ensure its systems didn't contain sensitive data?

Espressosaurus 2 days ago | parent | prev | next [-]

So they shouldn’t be punished because they were negligent? Is that your argument?

pc86 2 days ago | parent [-]

I think their argument is that FB has a pipeline that processes whatever data you give it and the idea that a human being made the conscious decision to use this data is almost certainly not what happened.

"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.

qwertylicious 2 days ago | parent | next [-]

Yeah, sorry, no, I have to disagree.

We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"

LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.

Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.

If Facebook did indeed build a data pipeline and targeted advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.

pc86 2 days ago | parent [-]

What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?

None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.

qwertylicious 2 days ago | parent | next [-]

That's not my problem to solve?

If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.

You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.

Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.

> None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.

Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.

That context matters.

myaccountonhn 2 days ago | parent | next [-]

I often think about what having accountability in tech would entail. These big tech companies only work because they can neglect support and any kind of oversight.

In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.

decisionsmatter 2 days ago | parent | prev [-]

It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users to send user data into the ingestion pipeline per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and was collected with consent_ into their ads systems.

qwertylicious 2 days ago | parent [-]

So let's consider the possibilities:

#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.

#2. Facebook had inadequate mechanisms for evaluating their partners, and that while they could have caught this problem they failed to do so, and therefore Facebook was negligent.

#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.

Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.

pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.

If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.

Does that clarify my position?

decisionsmatter 2 days ago | parent | next [-]

No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook with the label that it was "consented and non-sensitive health data" when it was in fact unconsented and very sensitive health data. But this is the fault of Flo. Not Facebook.

You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when it receives data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which the data Flo sent to them claimed to have). This case is very squarely #1 in your example and maybe a bit of #2.

2 days ago | parent | next [-]
[deleted]
ryandrake 2 days ago | parent | prev | next [-]

If FB is going to use the data, then it should have the responsibility to check whether they can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.

To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.

gruez 2 days ago | parent [-]

>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.

AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There's no such obligations for other sensitive data.

pixl97 2 days ago | parent | prev | next [-]

Mens rea vs actus reus.

In some crimes actus reus is what matters. For example if you're handling stolen goods (in the US) the law can repossess these goods and any gains from them, even if you had no idea they were stolen.

Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrong doing, then it can't!"

shkkmo 2 days ago | parent | prev | next [-]

>Facebook did not make a conscious decision to process this data.

Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.

> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents

This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.

If those systems exist, they clearly failed to actually work. However, the court documents indicate that Facebook didn't build out systems to check if stuff is health data until afterwards.

Capricorn2481 2 days ago | parent | prev [-]

> Facebook did not make a conscious decision to process this data. It just did.

What everyone else is saying is that what they did was illegal, and that they did it automatically, which is worse. The system you're describing was, in fact, built to do exactly that. They are advertising to people based on the honor system of whoever submits the data pinky promising it was consensual. That's absurd.

changoplatanero 2 days ago | parent | prev [-]

"doing everything they could" is quite the high standard. Personally, I would only hold them to the standard of making a reasonable effort.

qwertylicious 2 days ago | parent [-]

Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).

const_cast 15 hours ago | parent | prev [-]

> What does the system look like where a human being individually verifies every pieces of data fed into an advertising system?

Probably what it looked like 20 years ago.

Also, relatedly, if there's no moral or ethical way to conduct your business model, that doesn't mean that you're off the hook.

The correct outcome is your business model burns to the ground. That's why I don't run a hitman business, even though it would be lucrative.

If mass scale automated targeted advertising cannot be done ethically, then it cannot be done at all. It shouldn't exist.

const_cast 15 hours ago | parent | prev | next [-]

A human was involved somewhere, maybe they made the pipeline.

You can't, or shouldn't, outsource responsibility to computers. You can give them tasks. But if computers fail the tasks you give them, that's your responsibility. Because code doesn't have a conscience or an understanding of morality.

bee_rider 2 days ago | parent | prev | next [-]

It is necessarily negligence if they are ingesting a lot of illegal data, right? I mean, it could be the case that this isn’t a business model that works given typical human levels of competence.

But working beyond your competence when it results in people getting hurt is… negligent.

ratelimitsteve 2 days ago | parent | prev [-]

You're absolutely right, a human being didn't make the conscious decision to use this data. They made a conscious decision to build an automated pipeline that uses this data and another conscious decision not to build in any checks on the legitimacy of said data. Do we want the law to encourage responsibility or intentional ignorance and plausible deniability?

ramonga 2 days ago | parent | prev | next [-]

I would expect an app with 150 million active users to trigger some kind of compliance review at Meta

Capricorn2481 2 days ago | parent | prev [-]

This is the argument companies use for having shitty customer support. "Our business is too big for our small support team."

Why are you scaling up a business that can't refrain from fucking over customers?

bluGill 2 days ago | parent | prev | next [-]

I would say you have a responsibility to ensure you are getting legal data; you don't buy stolen things. That is, meta has a responsibility to ensure that they are not partnering with crooks. Flo gets the largest share of the blame, but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they made you understand them.)

gruez 2 days ago | parent | next [-]

>Flo gets the largest blame but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they make you understand it)

Court documents say that they blocked access as soon as they were aware of it, and that Facebook also "built out its systems to detect and filter out “potentially health-related terms.”" Are you expecting more, like some sort of KYC/audit regime before you could get any API key? Isn't that the exact sort of stuff people were railing against, because indie/OSS developers were being hassled by the play store to undergo expensive audits to get access to sensitive permissions?
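
For what it's worth, a term filter like that is probably not much more sophisticated than something like this (purely hypothetical; the system Facebook actually built isn't public):

    // Hypothetical illustration of a "potentially health-related terms" filter.
    val healthTerms = listOf("pregnan", "ovulat", "menstrua", "fertil", "diagnos")

    fun looksHealthRelated(eventName: String): Boolean =
        healthTerms.any { eventName.lowercase().contains(it) }

    fun main() {
        println(looksHealthRelated("R_PREGNANCY_WEEK_CHOSEN")) // true
        println(looksHealthRelated("ADD_TO_CART"))             // false
    }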

hedgehog 2 days ago | parent | next [-]

Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes. If it's too hard to provide strong assurance that errors like Flo's won't result in adverse outcomes for the public, perhaps they should have designed a system that didn't work that way.

gruez 2 days ago | parent [-]

>Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes.

"chose" is doing a lot of the heavy lifting here. Suppose you ran a Mastodon server and it turned out some people were using it to share revenge porn unbeknownst to you. Suppose further that they did it in a way that didn't make it easily detectable by you (eg. they did it in DMs/group chats). Sure, you can dump out the database and pore over everything just to be sure, but it's not like you're going to notice it day to day. If a few months later the revenge porn ring got busted should you be charged with "intentionally eavesdropping" on revenge porn or whatever? After all, to some extent, you "chose" to run the Mastodon server.

hedgehog a day ago | parent [-]

Transmitting messages between users is a functional property of Mastodon that is of course visible and valuable to the users. Transmitting protected health data from Flo users to anyone with a dollar to buy some ads is not a functional property of Flo itself or of a mobile ad product, and it is likely surprising to both Flo and Flo's users. Facebook has discretion over how they use that data. If this is a rare and unavoidable consequence of their business model, Facebook should be comfortable paying the settlements as judgements occur.

bluGill a day ago | parent | prev [-]

Details matter. Sometimes blocking as soon as you are aware of it is enough, sometimes it isn't. Those "systems to detect and filter out “potentially health-related terms”" need to be examined in depth - are they enough, and were they added only after the fact when they should have been more proactive?

Not knowing those details (they are probably available but I'm not interested enough to read the court documents) I'm going to defer to the courts on this. Understand that depending on ongoing appeals I may have to change my stance a few times. If this keeps coming up I may eventually have to get interested and learn more details so I can pressure my representative to change the laws, but for now this just isn't important enough - to me - to dig farther than the generalizations I made above.

hennell 2 days ago | parent | prev | next [-]

I have the type of email address that regularly receives email meant for other people with a similar name. Invites, receipts, and at one point someone's Disney+ account.

At one point I was getting a stranger's fertility app updates - didn't know her name, but I could tell you where she was in her cycle.

I've also had NHS records sent to me, again entirely unsolicited, although that one had enough information that I could find out who it was meant for and inform them of the data breach.

I'm no fan of facebook, but I'm not sure you can criminalise receiving data; you can't control what others send you.

WhyCause 2 days ago | parent [-]

> ...you can't control what others send you.

Of course not. You can, however, control what you then do with said data.

If a courier accidentally dropped a folder full of nuclear secrets in your mailbox, I promise you that if you do anything with it other than call the FBI (in the US), you will be in trouble.

gruez 2 days ago | parent [-]

Except in this case it's unclear whether any intentional decision went on at meta. A better analogy would be if someone sent you a bunch of CSAM, it went to your spam folder, but then because you have backups enabled the CSAM got replicated to 3 different servers across state lines, and the FBI is charging you with "distributing" CSAM.

deadbabe 2 days ago | parent | prev | next [-]

If Flo accepted the terms of use, then it means they understood them.

Really the only blame here should be on Flo.

richwater 2 days ago | parent | prev [-]

> you don't buy stolen things.

This happens accidentally every single day and we don't punish the victim

bluGill 2 days ago | parent [-]

We do punish the victim - we take away stolen goods. If they knew the goods were stolen, they can be punished for it. Money laundering laws catch a lot of innocent people doing legal things.

HeavyStorm 2 days ago | parent | prev | next [-]

That's why in these cases you'd prefer a judgment without a jury. Technical cases like this will always confuse jurors, who can't be expected to understand details about SDKs, data sharing, APIs, etc.

On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.

zahlman 2 days ago | parent | next [-]

> Technical cases like this will always confuse jurors... On the other hand, in a number of highprofile tech cases, you can see judges learning and discussing engineering in a deeper level.

Not to be ageist, but I find this highly counterintuitive.

pc86 2 days ago | parent | next [-]

Judges aren't necessarily brilliant, but they do spend their entire careers reading, listening to, and dissecting arguments. A large part of this requires learning new information at least well enough to make sense of arguments on both sides of the issue. So you probably do end up self-selecting for older folks able to do this better than the mean for their age, and likely better than the population at large.

Let's just say with a full jury you're almost guaranteed to get someone on the other side of the spectrum, regardless of age.

willsmith72 2 days ago | parent | prev | next [-]

How exactly? You expect the average joe to have a better technical understanding, and more importantly a better ability to learn, than a judge? That is bizarre to me.

zahlman 2 days ago | parent [-]

I expect the average joe to use technology much more than a judge.

BobaFloutist 2 days ago | parent | prev [-]

The judge is at their job. The jury is made up of conscripts who are often paying a financial penalty to be present.

mulmen 2 days ago | parent [-]

Weird deference to authority

BobaFloutist 19 hours ago | parent [-]

What? No, I think jury trials are very, very important, for all their flaws, but I don't think it demonstrates deference to authority to say that it makes sense that your average judge is likely to be more invested in your average trial than your average jury, for the same reason that volunteer militaries tend to have more dedicated soldiers than conscript militaries.

The judge decided to pursue this career, studied law, is fairly well paid for the position, and has nowhere better to be. The jury is likely losing pay, is worried about parking and where and when they're going to eat lunch, and probably just wants the trial to be over (for all that they would of course prefer the outcome to be correct) so they can go home and return to their normal lives.

mulmen 16 hours ago | parent [-]

> I don't think it demonstrates deference to authority to say that it makes sense that your average judge is likely to be more invested in your average trial than your average jury

First off averages aren’t good enough here.

Second I can’t imagine a better example of an appeal to authority.

Trials are an administrative action. Interest in the process is no indication that the outcome will be just.

Your idea falls apart for all of the reasons an appeal to authority is a fallacy.

Additionally you’re fundamentally changing the nature of society by holding the people accountable to power rather than each other.

dehrmann 2 days ago | parent | prev | next [-]

I've also heard you want a judge trial if you're innocent, jury if you're guilty. A judge will quickly see through prosecutorial BS if you didn't do it, and if you did, it only takes one to hang.

echoangle 2 days ago | parent | prev | next [-]

Is it easier for the prosecution to make the jury think Facebook is guilty or for Facebook to make the jury think they are not? I don’t see why one would be easier, except if the jury would be prejudiced against Facebook already. Or is it just luck who the jury sides with?

dylan604 2 days ago | parent | next [-]

I'd imagine Facebook would want any potential juror in tech dismissed as quickly as possible, while the prosecution would be looking to seat as many tech jurors as they could.

azemetre 2 days ago | parent | prev [-]

I mean, it totally depends on what your views on democracy are. Juries are one of the few practices, and likely the only one, taken from Ancient Athenian democracy, which was truly led by the people. The fact that juries still work this way is a testament to the practice.

With this in mind, I personally believe groups will always come to better conclusions than individuals.

Being tried by 12 instead of 1 means more diversity of thought and opinion.

tracker1 2 days ago | parent [-]

I mostly agree here, but would add that there's definitely a social pressure to go along with the group a lot of the time, even in jury trials. How many people genuinely have the fortitude to stand up to a group of 10+ others with a countering POV?

azemetre a day ago | parent [-]

I don't disagree, but think of the pressures a judge has as an individual as well. Pressures from the legal community, the electorate, and being seen as impartial.

There is a wisdom of the crowd, and that wisdom comes in believing that we are all equal under the law. This wisdom is more self evident in democratic systems, like juries.

mrkstu 2 days ago | parent | prev | next [-]

My understanding is defendants always get to choose, no? So that was an available option they chose not to avail themselves of.

at-fates-hands 2 days ago | parent | prev [-]

>> Technical cases like this will always confuse jurors.

This has been an issue since the internet was invented. It's always been the duty of the lawyers on both sides to present the information in cases like this in a manner that is understandable to the jurors.

I distinctly remember that during the OJ case, the media said many issues were most likely presented in such a detailed manner that many of the jurors seemed to be checked out. At the time, the prosecution spent days just on the DNA evidence. In contrast, the defense spent days just on how the LAPD collected evidence at the crime scene, with the same effect: many on the jury seemed to check out the deeper the defense dug into it.

So it's not just technical cases: any kind of court case that requires a detailed understanding of anything complex comes down to how the lawyers present it to the jury.

benreesman 2 days ago | parent | prev | next [-]

I tend to agree in this instance. But this is why you don't build a public brand of doing shit very much like this constantly.

Innocent until proven guilty is the right default, but at some point when you've been accused of misconduct enough times? No jury is impartial.

2 days ago | parent | prev | next [-]
[deleted]
nikanj 2 days ago | parent | prev | next [-]

Suing Facebook instead of Flo makes perfect sense, because Facebook has much more money. Plus juries are more likely to hate FB than a random menstruation company.

mattmcknight 2 days ago | parent [-]

They sued both.

1oooqooq 2 days ago | parent | prev [-]

[flagged]

2 days ago | parent | next [-]
[deleted]
zahlman 2 days ago | parent | prev [-]

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

> Please don't fulminate. Please don't sneer, including at the rest of the community.