| |
| ▲ | thinkingtoilet 2 days ago | parent | next [-] | | Should large corporations be able to break the law because it's too hard for them to manage their data? Should they be immune from lawsuits because actively moderating their product would hurt their business model? Does Facebook have a right to exist? You know exactly what it would look like: Facebook being legally responsible for how it uses the data it receives. If they are too big to do that, or are getting too much data to do that, the answer isn't to let them off the hook. Also, let's not pretend Facebook doesn't have a 15-year history of actively misusing data. This is not a one-off event. | | |
| ▲ | gruez 2 days ago | parent [-] | | >Should large corporations be able to break the law because [...] No, because this is begging the question. The point being disputed is whether Facebook offering an SDK and analytics service counts as "intentionally eavesdropping". Anyone with a bit of understanding of how SDKs work should see that it's not. If you told your menstrual secrets to a friend, and that friend then told me, that's not "eavesdropping" to any sane person, but that's essentially what the jury ruled here. I might be sympathetic if Facebook were being convicted of "trafficking private information" or whatever, but if that's not a real crime, we shouldn't be using "intentionally eavesdropping" as a cudgel against it just because we hate it. That goes against the whole concept of the rule of law. |
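For concreteness, here is roughly what the app side of such an SDK call looks like: a minimal Kotlin sketch based on the public Facebook App Events API. The event name and parameter are hypothetical, not taken from the Flo case.

    import android.content.Context
    import android.os.Bundle
    import com.facebook.appevents.AppEventsLogger

    // The host app, not Facebook, chooses what to log and when.
    fun logCycleEvent(context: Context) {
        val logger = AppEventsLogger.newLogger(context)
        val params = Bundle().apply {
            // Hypothetical parameter; any key/value the developer picks.
            putString("screen", "calendar")
        }
        // To the SDK this is an opaque, developer-chosen string that is
        // batched and forwarded to Facebook's servers like any other event.
        logger.logEvent("cycle_day_logged", params)
    }

Whether receiving such developer-chosen strings counts as "intentionally eavesdropping" is exactly the point in dispute.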
| |
| ▲ | jjulius 2 days ago | parent | prev | next [-] | | >What do you think this should look like? My honest answer, which I know is impossible: targeted advertising needs to die entirely. | | |
| ▲ | const_cast 16 hours ago | parent [-] | | I don't think it's impossible. If it's too hard to do something legally, then the solution is: don't do it. Running a gambling operation, for example, is very risky and has a high compliance barrier, so most companies just don't. In fact, most B2B companies won't even sell to gambling operations, depending on what exactly they're selling. |
| |
| ▲ | banannaise 2 days ago | parent | prev | next [-] | | > What do you think this should look like? Institutions that handle sensitive, access-regulated data generally have a compliance process that must be followed before accessing and using that data, and a compliance department staffed with experts who review and approve or deny access requests. But Facebook would rather move fast, break things, pay some fines, and reap the benefits of its illegal behavior. | | |
| ▲ | gruez 2 days ago | parent [-] | | >Institutions that handle sensitive, access-regulated data generally have a compliance process that must be followed before accessing and using that data, and a compliance department staffed with experts who review and approve or deny access requests. Facebook isn't running an electronic medical records business. It has no expectation that it will be receiving sensitive data, and specifically discourages it. What more are you expecting? That any company dealing in bits should have a moderation team poring over all records to make sure they don't contain "sensitive data"? >But Facebook would rather move fast, break things, pay some fines, and reap the benefits of its illegal behavior. Running an analytics service that allows apps to send arbitrary events is "move fast, break things" now? | | |
| ▲ | const_cast 16 hours ago | parent | next [-] | | Whether you are a medical-records-processing service doesn't depend on self-identification; it depends on whether you process medical data. Evidently Facebook does use medical data for targeted advertising. So they are a medical records business. | |
| ▲ | whstl a day ago | parent | prev [-] | | Is this a simple hosted analytics service, where outputs are only accessible by Flo, or does Facebook use this data in some other meaningful way? If this data is used for targeting, I’m afraid we can’t call it just an “analytics service”. |
|
| |
| ▲ | SilasX 2 days ago | parent | prev | next [-] | | Yeah, I'm not sure if I'm missing something, and I don't like to defend FB, but ... AIUI, they have a system for using data they receive to target ads. They tell people not to put sensitive data in it. Someone does anyway, and it gets automatically picked up to target ads. What are they supposed to do on their end? Even if they apply heuristics for "probably sensitive data we shouldn't use"[1], some stuff is still going to get through. The fault should still lie with the entity that passed on the sensitive data. An analogy: you want to share photos of an event you hosted, so you tell people to send in their pics while enforcing the norm "make sure to ask before taking someone's photo"; someone insists that what they sent in complied with that rule when it didn't, and then you share the photos. [1] Edit: per your other comment, they indeed had such heuristics: https://news.ycombinator.com/item?id=44901198 | |
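To make [1] concrete: any such heuristic is necessarily a blunt pattern screen over developer-chosen event names. Below is a hypothetical Kotlin sketch of the idea, not Facebook's actual implementation; the term list and matching logic are illustrative only.

    // Hypothetical server-side screen for incoming analytics events.
    val sensitiveTerms = listOf("pregnan", "period", "ovulat", "hiv", "diagnos")

    fun looksSensitive(eventName: String): Boolean =
        sensitiveTerms.any { it in eventName.lowercase() }

    fun shouldUseForAds(eventName: String): Boolean {
        // Drop anything that trips the heuristic instead of feeding it
        // into ad targeting.
        return !looksSensitive(eventName)
    }

An app that sends "event_23" instead of "pregnancy_logged" sails straight through, which is the point: some data will always get past a screen like this.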
| ▲ | jlarocco 2 days ago | parent | next [-] | | It doesn't work like that, though. Companies don't get to do whatever they want just because they didn't put any safeguards in place to prevent illegally using the data they collected. The correct answer is to look at the data and verify it's legal to use. I might be sympathetic to a tiny startup facing increased costs, but it's a cost of doing business just like anything else. And Facebook has more than enough resources to put safeguards in place, and they definitely should have known better by now, so they should be punished for not complying. | |
| ▲ | SilasX 2 days ago | parent [-] | | > The correct answer is to look at the data and verify it's legal to use. So repeal Section 230 and require every site to manually evaluate all content uploaded for legality before doing anything with it? If it’s not reasonable to ask sites to do that, it’s not reasonable to ask FB to do the same for data you send them. Your position seems to vary based on how big/sympathetic the company in question is, which is not very even-handed and implicitly recognizes the burden of this kind of ask. | | |
| ▲ | const_cast 16 hours ago | parent [-] | | Not before doing anything with it, just before processing it for specific business use cases like targeting. Running a forum is fine, and I don't care if someone enters a fake SSN in a forum post. I DO care if someone enters a fake SSN on a financial form I provide, and it is actually my responsibility to prevent that. That's what KYC is, and then some. |
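To illustrate the kind of check that implies, here is a minimal Kotlin sketch of a structural SSN screen for a financial form. The code is hypothetical and does format validation only; real KYC goes further, verifying the number against the applicant's identity through external records.

    // Hypothetical structural check for an SSN field on a financial form.
    // Catches malformed and never-issued patterns, not stolen numbers.
    private val ssnFormat = Regex("""\d{3}-\d{2}-\d{4}""")

    fun isPlausibleSsn(raw: String): Boolean {
        if (!ssnFormat.matches(raw)) return false
        val (area, group, serial) = raw.split("-")
        // Area 000, 666, and 900-999 are never issued;
        // group 00 and serial 0000 are likewise invalid.
        if (area == "000" || area == "666" || area.toInt() >= 900) return false
        return group != "00" && serial != "0000"
    }

The point of the parallel: the form's owner, not the person typing, is expected to screen what the form accepts.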
|
| |
| ▲ | mlyle 2 days ago | parent | prev | next [-] | | The problem is, the opposite approach is... "We're scot-free, because we told *wink* people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'" Please don't sell me cocaine *snifffffffff* > The fault should still lie with the entity that passed on the sensitive data. Some benefits to making it both: * Centralize enforcement with more knowledgeable entities * Enforce at a level where the misdeeds can actually be identified and have scale, rather than death by a million cuts * Prevent the central entity from using deniable proxies and cut-outs to do bad things This whole notion that we want so much scale, and that scale is an excuse for not paying attention to what you're doing or exercising due diligence, is repugnant. It pushes some cost down but also causes a lot of social harm. If anything, we should expect more ownership and responsibility from those with concentrated power, because they have more ability to cause wide-scale harm. | |
| ▲ | gruez 2 days ago | parent [-] | | >"We're scot-free, because we told wink people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'" >Please don't sell me cocaine snifffffffff Maybe there's something in discovery that substantiates this, but as far as I can tell there's no "wink" happening, officially or unofficially. A better analogy would be charging Amazon with drug distribution because some enterprising drug dealer decided to use FBA to ship drugs while Amazon was unaware. | |
| ▲ | mlyle a day ago | parent [-] | | Facebook gets a financial benefit when the counterparty to the contract breaks the rule, and so has no incentive to enforce it (rather, the opposite). Unless, of course, Facebook is held accountable for not enforcing it. |
|
| |
| ▲ | bee_rider 2 days ago | parent | prev [-] | | I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends, you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm. If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing whether people consented to have their photos shared, depending on the nature of the venue. At this point, though, it is barely an analogy. | |
| ▲ | SilasX 2 days ago | parent [-] | | >I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends, you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm. You can't, though -- not perfectly, anyway. Whatever the informal norms, there are going to be people who violate them, and the fault shouldn't pass on to you when you don't know someone is doing that. If anything, the analogy understates how unreasonable this is to FB, since they had an explicit contractual agreement for the other party not to send them sensitive data. And as it stands now, websites aren't expected to pre-filter against some heuristic for "non-consensual user-uploaded photographs" (which would require an authentication chain), just to take them down when informed they're illegal ... which FB did (the analog of) here. >If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing whether people consented to have their photos shared, depending on the nature of the venue. I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this. | |
| ▲ | bee_rider 2 days ago | parent [-] | | > I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this. It depends on the event and the nature of the venue. But yes, it is a bad analogy. For one thing, Facebook is not an event with clearly delineated borders; it should naturally be given much higher scrutiny than anything like that. |
|
|