Meta in row after workers who saw smart glasses users having sex lose jobs (bbc.com)
390 points by gorbachev 3 hours ago | 226 comments
gorbachev 3 hours ago | parent | next [-]

Meta cancels the contract with the outsourcing company it hired to classify smart glasses content, after employees at that company blew the whistle about serious privacy issues with the content they were paid to classify.

SlinkyOnStairs 2 hours ago | parent | next [-]

"Fun" bonus fact: This isn't the first time Sama (the outsourcing company) has had these problems.

OpenAI had them classify CSAM, so Sama dropped OpenAI as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/

We're 4 years on, 3 years since that report broke. Not a single thing has improved about how tech companies operate.

prepend 2 hours ago | parent | next [-]

How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.

It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).

SlinkyOnStairs an hour ago | parent | next [-]

> How else do you want companies to remove and prevent CSAM?

Different situation.

Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on Facebook, so it must moderate.

And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem; any newspaper accepting mail has it too (albeit at a much smaller scale). People have been nailing obscene things to bulletin boards for all of recorded history.

---

In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.

---

This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)

But OpenAI is not such a firm. It's a general AI company.

GrinningFool an hour ago | parent | next [-]

> traumatizing contract workers for $2-3 an hour)

Is there an hourly rate at which this should be acceptable?

genewitch 37 minutes ago | parent [-]

triage doctors, what do they make? give people who have to review the worst humanity has to offer and pay them that. and while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.

jdiff 9 minutes ago | parent [-]

Why take from other workers when it can be siphoned from upper management and shareholders?

deaux 15 minutes ago | parent | prev | next [-]

> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

This is of course incredibly illegal, but megacorps (by valuation) and oligarchy members are above the law, so who cares. I assume there could be a regulatory framework that makes this legal for an extremely specific purpose, but there is zero chance that OpenAI was part of this/abiding by it in 2022, absolutely none.

fragmede 22 minutes ago | parent | prev [-]

OpenAI runs ChatGPT where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?

abdullahkhalids 2 hours ago | parent | prev | next [-]

CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.

The correct way to organize social media is in a federated way. Each server holds on average a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

Not many moderators will have to look at CSAM, because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

devilbunny an hour ago | parent | next [-]

> Server moderators should be legally responsible for content on their server.

And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.

abdullahkhalids 34 minutes ago | parent [-]

By that logic, physical life wouldn't function either. People get banned or removed from all sorts of informal and formal groups all the time, for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology it will never go away. They simply move to different groups - and similarly, online they can move to a different federated server.

But that's not possible in today's oligopoly of social media. An invisible algorithm will ban you, and there is no way back, and few alternates. Big Social Media is way worse from a sanitizing perspective than some federated social media.

haritha-j an hour ago | parent | prev | next [-]

Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.

abdullahkhalids 14 minutes ago | parent | next [-]

No. It's a legitimately difficult problem, because not all naked pictures of kids are illegal. The false-positive problem is bad for business, but also generally bad even if big social media were benevolent.

Moderators need to actually understand the context of the picture/video, which requires knowledge of culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.

But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.

GrinningFool an hour ago | parent | prev [-]

Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.

2ndorderthought an hour ago | parent | prev | next [-]

Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.

esyir an hour ago | parent [-]

If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.

lokar an hour ago | parent | next [-]

A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.

These are pretty clear laws established by a democratic government with a pretty good record for rule of law.

esyir an hour ago | parent [-]

Sure, then they can go demand said standards for social media platforms, including an expected rate per N posts, just as car companies are not expected to have a fatality rate of zero.

The fact is that sheer scale means there will always be something, no matter how abhorrent. Small scale doesn't change this; it just concentrates it.

2ndorderthought an hour ago | parent | prev | next [-]

Do car companies sell cars without airbags or seat belts? What about cars that haven't been crash-tested? What happens to them if they don't do this, do you think?

Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?

esyir an hour ago | parent [-]

We're talking about CSAM, right? Which all platforms remove proactively, build models to remove, and essentially always respond to when informed.

Demanding some perfect, immediate, magic response is the equivalent of asking car manufacturers to prevent all deaths.

2ndorderthought an hour ago | parent [-]

Do they remove it and respond really though?

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Here it's said that it's the users' fault. I disagree completely. Staying on topic, many of these companies have laid off the employees who tried to prevent things like this:

https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html

https://www.zdnet.com/article/us-ai-safety-institute-will-be...

https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...

The list of not even trying anymore goes on and on. MechaHitler was also fun.

_DeadFred_ an hour ago | parent | prev [-]

When Ford dgaf with the Pinto and the Corvair (like tech companies do not gaf now), they deservedly got this same level of contempt/demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.

So yes, yes, let's do like we did with cars.

genewitch 35 minutes ago | parent [-]

I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.

Yokohiii an hour ago | parent | prev | next [-]

I am not sold on the federated thing to solve CSAM or similar issues.

Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Then they'd have to solve it on their own, and be forced to realize their business model is shit.

Aurornis an hour ago | parent | prev | next [-]

> Server moderators should be legally responsible for content on their server.

So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?

> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?

They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.

This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.

genewitch 28 minutes ago | parent [-]

Right, and you have other users on the fediverse who notice that server leaking and, if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.

It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that I'd block if I ran another fediverse server.

Fosstodon fediverse server doesn't have this issue, for example.

I replied this way because, the way you wrote it, it sounds like an indictment of a system that's designed, above all else, to avoid advertisers getting user profiles.

The problem is the people who participate in this, and not "the network."

Barrin92 28 minutes ago | parent | prev | next [-]

>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

No it isn't. Small servers often don't have paid security or moderation, are run anonymously, and have no profit motive that can even be used to incentivize them against hosting illegal content.

That's visible when it comes to porn. There are a million bootleg porn sites on the internet that host illegal content. The only site that was ever forced to curate its content was Pornhub, because it's sufficiently large and works in a jurisdiction that has laws and can hold it accountable. From a content moderation standpoint, going after a million web forums is an absolute pain compared to going after Facebook.

Which is the first argument any decentralization advocate always brings up (and they're correct to do so): censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.

devmor an hour ago | parent | prev | next [-]

The one thing I can add to this conversation is that I think the government simply does not care, either. It's mainly only in response to mass public outrage, or when someone is a political target, that it gets dealt with at a law enforcement level.

Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.

One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.

devilbunny an hour ago | parent [-]

That seems reasonable though. The FBI isn’t interested in busting one perv in a closet, they want the ones making the stuff.

muglug an hour ago | parent | prev [-]

> Banning people is way easier on small servers

Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.

genewitch 15 minutes ago | parent [-]

When I ran a fediverse server for myself and 3 people, but allowed public signups if someone came by, it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because I didn't want their content on my service.

That's more what I got from that pull-quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last I heard). They're moderated so effectively they often get DDoSed by Russian IPs for banning users for scam posts from that country.

Yokohiii an hour ago | parent | prev | next [-]

These workers prepare data for AI. I don't think the need for them will go away anytime soon.

Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while actually having very particular human requirements.

IncreasePosts an hour ago | parent | prev [-]

Couldn't you just use multiple classifiers? Like an "is a minor" classifier coupled with an "is sexual content" classifier?
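A toy sketch of that idea, purely hypothetical (the scores, thresholds, and names here are made up, not any real moderation API): flag content for human review only when both classifiers fire.

```python
from dataclasses import dataclass

@dataclass
class Scores:
    minor_score: float   # hypothetical P(subject is a minor) from classifier A
    sexual_score: float  # hypothetical P(content is sexual) from classifier B

def needs_human_review(s: Scores,
                       minor_threshold: float = 0.7,
                       sexual_threshold: float = 0.7) -> bool:
    """Route to a human reviewer only when both classifiers agree."""
    return (s.minor_score >= minor_threshold
            and s.sexual_score >= sexual_threshold)

print(needs_human_review(Scores(0.9, 0.8)))  # both fire -> True
print(needs_human_review(Scores(0.9, 0.1)))  # minor but not sexual -> False
```

Even with conservative thresholds, some benign content (family beach photos, say) will land above both cutoffs, which is exactly the false-positive problem discussed elsewhere in the thread - hence the humans.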

superfrank 4 minutes ago | parent [-]

How would you test that that works?

deaux 17 minutes ago | parent | prev | next [-]

> Sama (the outsourcing company)

If script writers gave the company this name in a fictionalization it would be rejected as too on the nose.

cyanydeez 2 hours ago | parent | prev [-]

Isn't it more that tech companies are just higher profile and more integral to the political and social landscape than older companies? But, reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism.

2ndorderthought an hour ago | parent | next [-]

They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.

intended an hour ago | parent | prev | next [-]

Yes and no.

Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.

I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.

I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.

I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.

Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.

SlinkyOnStairs an hour ago | parent | prev [-]

Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)

But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.

Sam Altman's not a fascist, he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM does reflect poorly on Altman, as do the accusations from his sister, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and responding poorly.

In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of the turn to fascism; Zuckerberg's always been like this, and the actions taken against him were clearly not enough to keep the company culture from following his precedent.

deaux 10 minutes ago | parent [-]

> Companies of the 20th century certainly weren't more ethical.

Disagree, not on average. There was a non-trivially higher % of decisions made based on "what's good for the customer" or "what's good for the product" or "I would be ashamed to do this" and a lower % of decisions made based on "what maximizes profit in the next quarter". I think that is more ethical. To take it to an extreme, using slave labor because it's good for the customer is more ethical than using slave labor to maximize profit in the next quarter.

everdrive 3 hours ago | parent | prev | next [-]

Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.

jofzar 3 hours ago | parent | next [-]

There's a name for these people, glassholes

paulddraper 15 minutes ago | parent | prev | next [-]

Are GoPros acceptable?

I went to the beach, jet skiing with a dude with Meta glasses.

I liked the footage.

everdrive 11 minutes ago | parent [-]

>I liked the footage.

So did Meta's LLM training model, as well as the contractor across the globe reviewing your footage.

elevation 2 hours ago | parent | prev | next [-]

> I wouldn't even recommend being in their presence.

Great! Now do people with smart TVs and people with smart phones

intended an hour ago | parent [-]

Don’t we already hate the invasive ad tech industry?

Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?

Aaronstotle 2 hours ago | parent | prev | next [-]

I want to get the Oakley Meta ones so I can record bike rides easier, should I not be tolerated?

bombcar 15 minutes ago | parent | next [-]

Wear a GoPro on your helmet like the rest so you can be shunned.

If you insist on the glasses, wear a fake GoPro.

bee_rider 2 hours ago | parent | prev | next [-]

A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.

mplewis 28 minutes ago | parent | prev [-]

No. Fuck off

arowthway 3 hours ago | parent | prev [-]

Also make sure to avoid people with smartphones and places with video surveillance.

powvans 3 hours ago | parent | next [-]

Don't let perfect be the enemy of good.

There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.

HumblyTossed 3 hours ago | parent | prev | next [-]

Is this an honest argument? Surely you can think of how glasses might be ... in a different league than the two items you mention?

yreg 2 hours ago | parent | next [-]

Unless you are using these during sex, I consider a microphone to be 10x more privacy-intruding than a camera.

Security cameras AFAIK usually don't record audio, but all phones can. And they don't even need to be pointed in any specific direction.

zdp7 2 minutes ago | parent [-]

Many security cameras have the ability to record audio. Depending on where you are, it might be illegal to use it. All the cams I have purchased have it. That would include ReoLink and a recommended model from the Frigate site.

db48x 3 hours ago | parent | prev | next [-]

Not meaningfully. Anyone holding a smartphone might be recording you. You’d better avoid them if you don’t want to be recorded.

NBJack 2 hours ago | parent | next [-]

Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.

Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.

The general populace is much more likely to notice the former recording rather than the latter.

recursive 2 hours ago | parent [-]

I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.

bredren 3 hours ago | parent | prev | next [-]

This line makes a valid point. People record strangers all the time. In an obvious way or trying to be sneaky.

Just because you don’t notice it doesn’t mean it doesn’t happen.

However, this is still a different thing than smart glasses which can further be segmented into who designed the smart glasses.

azan_ 3 hours ago | parent | prev [-]

Someone has to hold smartphone and point it at you.

arowthway 2 hours ago | parent | prev [-]

Because a person wearing glasses usually can move and video surveillance cameras usually can't? If that's not it, then spell it out for me, please. Also, why would I be deceptive in this discussion? I feel like I missed some ideological conflict.

intended an hour ago | parent [-]

Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.

On a separate note (and this is a genuine question), are you by any chance aware of the term non-consensual intimate imagery / NCII?

I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.

throwmeaway888 an hour ago | parent | next [-]

Have you heard the term non consensual intimate fantasies? I've heard it's an even bigger problem.

intended an hour ago | parent [-]

Well, you would fortunately be wrong. Fantasies are commonplace and well studied in society, psychology and even in the law.

The issue is when you go from fantasy to actually enacting it, which is usually when you earn the epithet of “Creep”.

Also, why make a throwaway for this line? I take it you haven’t heard of NCII?

salawat 33 minutes ago | parent | prev [-]

They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.

I can tell you that my urge to take photos/record drastically dips around other people. Particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for maximal indiscriminate data collection/processing "gargoyles". Personally I prefer glassholes.

https://www.tabletmag.com/sections/news/articles/the-borg-of...

freehorse 3 hours ago | parent | prev [-]

If somebody was pointing a camera on me all the time? I would definitely avoid them.

amelius 3 hours ago | parent [-]

People do that on my subway all the time.

It's the camera of their smartphone.

Not sure if it's ON though.

voidUpdate 3 hours ago | parent [-]

They point the camera of their smartphone directly at you?

randallsquared 2 hours ago | parent [-]

At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.

voidUpdate 2 hours ago | parent | next [-]

When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing

amelius an hour ago | parent [-]

Examples:

https://www.sciencephoto.com/media/922925/view/three-people-...

https://www.istockphoto.com/nl/foto/happy-woman-using-smart-...

wolvoleo 2 hours ago | parent | prev [-]

Usually they are pointed at the ground when they're reading off them.

stackghost 2 hours ago | parent | prev | next [-]

Mark Zuckerberg and disrespect for user privacy.

Name a more iconic duo.

ignoramous 2 hours ago | parent | prev | next [-]

> the content they were paid to classify

  A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.

  Meta says it's because Sama did not meet its standards, a criticism Sama rejects ...
Frieren 2 hours ago | parent | prev | next [-]

Whistleblower protection is key for any working society. Only dictatorships and oligarchies protect criminals while shaming whistleblowers.

I do not care which country the outsourcing company is in. When criminals go global, protection of whistleblowers should go global too.

getnormality 3 hours ago | parent | prev [-]

Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.

Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.

I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.

EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable.

But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy.

Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this, and who should decide? Personally I don't know.

elphinstone 3 hours ago | parent | next [-]

What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.

Why reflexively defend a massive tech corporation caught repeatedly violating the law?

Tangurena2 an hour ago | parent [-]

> Why reflexively defend a massive tech corporation caught repeatedly violating the law?

Because it is the natural expansion of the quote attributed to Upton Sinclair:

> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires

ImPostingOnHN 3 hours ago | parent | prev | next [-]

You would help conceal a crime against the people just because it's good business??

Congratulations, you have a bright future in politics and/or tech CEOing.

Bridged7756 32 minutes ago | parent [-]

More like a bright future being someone's fall guy. The ignorance of thinking a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.

giraffe_lady 3 hours ago | parent | prev | next [-]

There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.

noir_lord 3 hours ago | parent [-]

Beautifully and succinctly put.

OutOfHere 3 hours ago | parent | prev [-]

Proactively address the issues? Are you kidding me? This is not an issue that just happened to slip by; it is 100% by design. You're fooling no one.

getnormality 3 hours ago | parent [-]

What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.

OutOfHere 3 hours ago | parent | next [-]

Huh. There you go again, thinking everyone else is an idiot. Capture of users' video data by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glasses, ever.

fibonacci_man 3 hours ago | parent | next [-]

Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.

getnormality 2 hours ago | parent | prev [-]

I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.

3form 2 hours ago | parent [-]

If only we had the technology to record video without sending it to Meta's servers.

ImPostingOnHN 3 hours ago | parent | prev [-]

The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.

The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.

redbell 3 hours ago | parent | prev | next [-]

> "We see everything - from living rooms to naked bodies," one worker reportedly said.

> Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.

Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.

ryandrake 2 hours ago | parent | next [-]

> > Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.

> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.

It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?

2ndorderthought 2 hours ago | parent | next [-]

Well you gotta give out blackmail material to the scam centers somehow. Otherwise they don't actually have leverage! Oh right... We don't want that happening.

finghin 2 hours ago | parent | prev | next [-]

With lawyers like these, …

intended an hour ago | parent | prev [-]

I’ve worked in trust and safety - for me this is stupid, but well below the threshold of impossible.

Hell, I know of a major firm that decided QA was not needed for their trust and safety process.

Another common issue will be SEA Arabic speakers tasked with labelling Middle Eastern Arabic content, because accents and cultural dialects are not a thing.

I’ve had people at FAANG firms cry on my shoulder, because they couldn’t get access to engineering resources at their own firms.

There was the famous case of Meta executives overriding T&S policy and telling them what content was newsworthy during the Boston bombing. In a separate incident, they told their team that cartel violence was not newsworthy when friends in London complained about it.

When you say this is fantasy, what do you mean precisely?

abustamam 17 minutes ago | parent | next [-]

Meta could at least pretend that they don't intend to capture people in their most intimate and vulnerable moments instead of slobbering on the sideline like "mm... Data..."

ryandrake 35 minutes ago | parent | prev [-]

What I mean is: I'm not sure what they base their statement that it's "a common practice among other companies" on. Unlikely they are talking about their peer companies. I suppose if you read the sentence literally, there surely exist one or more "other companies" in the broad universe of "other companies" that routinely do this kind of stuff. But I wouldn't think anywhere serious.

DuncanCoffee 2 hours ago | parent | prev | next [-]

I once read the manual of one of those small floor cleaning robots (Ecovacs Deebot U2 pro), and it basically said that by using it you were giving them a right to take pictures and send them to a remote server (to analyze issues or something like that)

pfortuny 7 minutes ago | parent | prev | next [-]

Tagging, tagging, tagging. That is what "improving the customer experience" means: training its LLMs and diffusion models.

dotancohen an hour ago | parent | prev | next [-]

  > Am I reading this correctly?!

What you should have read correctly was the Facebook terms of service. I still get strange responses when I tell people that I don't use WhatsApp. All Meta's properties are tainted such that I won't use them.

falcor84 an hour ago | parent [-]

> What you should have read correctly was the Facebook terms of service.

I'm reminded of Bo Burnham's wonderful "That Funny Feeling" from 2021's "Inside", where one of the absurd examples he offers in the lyrics is:

  There it is again, that funny feeling
  That funny feeling
  Reading Pornhub's terms of service ...

chneu 2 hours ago | parent | prev | next [-]

How is this weird? People have been trading away their privacy for the smallest possible gains in convenience for a long time.

moritzwarhier 2 hours ago | parent [-]

Are you conflating telemetry with literally live-streaming your life to Meta? Because that's what makes the statement weird.

edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.

Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?

Streaming to an unknown number of employees might be considered different from streaming to the public, sure.

But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).

edit, clarified social media comparison

2ndorderthought 2 hours ago | parent | prev [-]

Meta is a defense contractor. They see the world a little differently from everyone else.

HarHarVeryFunny 3 hours ago | parent | prev | next [-]

Not sure which is worse here - that Meta are recording video from customers' smart glasses, or that they are firing people who talk about it.

embedding-shape 2 hours ago | parent | next [-]

The latter, as they can't even claim to have done so by accident, or that "it was just a bug".

OutOfHere 3 hours ago | parent | prev | next [-]

Everything having to do with Meta, starting with its very name, has been evil from the start.

SV_BubbleTime 2 hours ago | parent | prev [-]

Can I squeeze in just a teeny tiny bit of… why the hell are you wearing an internet camera on you while naked and/or having sex?

… although I really extend that to why are you wearing an internet connected camera that is obviously going to be monitored by Meta.

embedding-shape 2 hours ago | parent | next [-]

So this person wearing these glasses has already agreed that Meta can monitor them. They also probably trust Meta when it says "When the glasses are off, nothing is recording", for better or worse. With that perspective in mind, it's not far-fetched to assume these same people will willingly be naked in front of these recording devices they believe to be off.

Of course, anyone who has opened a newspaper in the last 10 years or so would know better, but I can definitely see some people not giving a fuck about it.

Tangurena2 an hour ago | parent | prev | next [-]

There are "content creators" who intentionally record people without any sort of consent. At least when they point cameras, one can notice the cameras and take action. With these sorts of glasses, no one in view has consented, nor have they agreed to any sort of terms & conditions.

I never understood the appeal of upskirt pictures. But I think that taking videos of non-consenting participants/victims is the current version of the upskirt photo craze.

sunaookami 2 hours ago | parent | prev [-]

The Ray-Ban stays ON during sex!

reliablereason 2 hours ago | parent | prev | next [-]

I wonder under what circumstances footage from the glasses is uploaded for classification.

Probably this is people asking the glasses something about what they see and the glasses uploading video for classification to generate an answer.

People think it is "just AI" so are not very concerned about privacy.

pfortuny 7 minutes ago | parent [-]

Always by default I assume.

jmull 2 hours ago | parent | prev | next [-]

I believe the tricky privacy and security issues around smart glasses (and other "personal" tech) can be navigated successfully enough by a thoughtful, diligent, responsive company.

Which is why I'd never touch a personal tech device from Meta.

Their entire DNA is written to exploit their users for profit. In my judgement, they literally cannot and will never consider those issues as anything other than something to obscure to keep people unaware of the depth of the exploitation.

swiftcoder 2 hours ago | parent | prev | next [-]

One of the bigger commercial niches for smart glasses is filming POV porn, so it is hardly surprising that sort of content ended up in the moderation queue. The project should have planned to account for that use case.

ozozozd 4 minutes ago | parent | next [-]

How do you moderate what people do? You send someone to stop them from having sex because it was streamed to your servers?

swiftcoder 2 hours ago | parent | prev | next [-]

And I do appreciate how awkward it is for Meta to admit that use case exists. Even in the Oculus Go days there were a bunch of polite euphemisms internally to avoid mentioning "our device has to ship with a browser so people can watch porn on it"

hosteur 2 hours ago | parent | prev [-]

Why is there even a “ moderation queue”? Isn’t this people’s private recordings?

dylan604 an hour ago | parent | next [-]

This is my question too. I get moderating things that people are posting. Not being familiar with the device and how it works, I'd assume that all footage is uploaded to the user's cloud account even if not publicly posted. This being cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted footage isn't being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it.

inerte an hour ago | parent | prev | next [-]

Yes but also we don't want people live streaming murder and suicide, so there's detection and moderation in place.

intended an hour ago | parent | prev [-]

I’m betting this is going to some ML / Data labelling pipeline.

swiftcoder an hour ago | parent [-]

Yeah, moderation may instead be labelling in this case. It's likely the same type of firm handles both sorts of work on behalf of FAANG.

intended an hour ago | parent [-]

Sounds plausible.

We could also toss a vibe-coded mess on top of this and probably get closer to the truth.

swiftcoder 34 minutes ago | parent [-]

The article itself is ambiguous on this point: "At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI."

That could be moderation, or it could be labelling new examples for training/validation

KaiserPro an hour ago | parent | prev | next [-]

Ex Meta employee here (yes you are right to boo):

The thing that really gets me is that internally there are 4 levels of data 1 being public domain shit (the sky is blue) up to 4 which is private user data, or something that is sensitive if leaked or shared.

I was told that by default all user data is level 4, as in if you do anything without decent approval, you're insta fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.

The part where I worked, in visual research, we had to jump through a year's worth of legal hoops to get permission to record videos in public. We had to build an anonymisation pipeline and a bulletproof audit trail, delete as much data as possible, with auto-delete if something went wrong.

We had rigid rules about where that data could be stored and _who_ could access it. We were not allowed to share "wild" footage (i.e. data that might have the hint of anyone who hadn't signed a contract) for annotation because it would be given to a third party. The public datasets we released all had traceable people and locations, all with legal waivers signed.

Then I hear they just started fucking hosing private data to annotators to _train_ on? Without any fucking basic controls at all? Just shows that whenever Zuck or monetization want something, the rules don't apply.

I look forward to that entire industry collapsing in on itself.
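The tiering described above amounts to a sensitivity-gated, deny-by-default access check. A minimal sketch, with invented level names, function signatures, and audit logging purely for illustration; nothing here reflects Meta's real internal tooling:

```python
# Hypothetical sketch of a sensitivity-tiered access check like the one
# described: data defaults to the most restrictive level, and any access
# to level-4 (private user) data without explicit approval is denied.
# Every attempt is recorded to the audit log either way.

LEVEL_PUBLIC, LEVEL_INTERNAL, LEVEL_CONFIDENTIAL, LEVEL_PRIVATE_USER = 1, 2, 3, 4

class AccessDenied(Exception):
    pass

def check_access(employee_id, data_level, approvals, audit_log):
    """Deny level-4 access unless (employee, level) is explicitly approved."""
    approved = (employee_id, data_level) in approvals
    audit_log.append((employee_id, data_level, approved))
    if data_level == LEVEL_PRIVATE_USER and not approved:
        raise AccessDenied(f"{employee_id} has no approval for level-4 data")
    return True

log = []
check_access("alice", LEVEL_PUBLIC, set(), log)          # public data: fine
try:
    check_access("bob", LEVEL_PRIVATE_USER, set(), log)  # denied and logged
except AccessDenied:
    pass
```

The point of the deny-by-default shape is that forgetting to request approval fails closed rather than open.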

dntrkv 29 minutes ago | parent | next [-]

> I was told that by default all user data is level 4, as in if you do anything without decent approval, you're insta fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.

Given the size and nature of Meta's business, I would assume they would have better systems in place. SWEs should only have access to PII with explicit consent from users/customers e.g. support tickets.

Especially someone going through boot camp. Do they have access to de-anonymized user data during training?

Shit, at my last company I had to jump through so many hoops to access user data even with consent from the customer.

theplatman 33 minutes ago | parent | prev [-]

I have always wondered about this, especially post-Cambridge Analytica, when Meta imposed really stringent requirements on API use even for personal things, while it was blatantly obvious that internally it was a different story.

mproud 2 hours ago | parent | prev | next [-]

https://archive.ph/ubWba

touwer 2 hours ago | parent | prev | next [-]

Big tech and the race to the bottom of the ethical pit. We can still go lowerrrr!

sheepcow 3 hours ago | parent | prev | next [-]

If you want to read more about how unsavory aspects of AI-training are off-loaded onto poor workers in third-world countries, would recommend Karen Hao's "Empire of AI". These workers are paid pennies an hour for unstable jobs that expose them to some horrific material.

intended an hour ago | parent [-]

Which examples did they cover in the book?

jimmyjazz14 24 minutes ago | parent | prev | next [-]

It still blows my mind that anyone would volunteer to don these smart glasses, it's almost like some alien mindset to me.

lbrito 19 minutes ago | parent [-]

Its a reverse They Live!

mxfh 2 hours ago | parent | prev | next [-]

Meta ended its contract with Sama

At this scale, this sounds like some insider-joke contract, made up only to make some hustle on the side, capitalizing with stock options on the possibility of ad-hoc news-trading bots glitching out on the keyword (here, "x.com/sama" signals).

rufasterisco 28 minutes ago | parent | prev | next [-]

It would be refreshing for once to see the top comment to such articles to be

“Yes, we all know it, and we keep those app installed regardless“.

yaur 37 minutes ago | parent | prev | next [-]

It seems the issue is not the glasses users, but the people that the glasses users were having sex with. Did meta get their consent before redistributing this content?

sidcool 28 minutes ago | parent | prev | next [-]

Why would anyone trust Meta with their personal data! After a while it's just natural selection.

ghm2180 an hour ago | parent | prev | next [-]

About the "they asked us to view it and then fired us for it". Having worked in their RL division(I don't work at meta anymore) this story is quite weird for two reasons:

1. Meta AFAIR paid/compensated people — contractors or recruited via ads — to have them submit their data. There are strict privacy protocols and reviews in place to distinguish data use in these cases vs the general public. This is not to say the process is perfect, but if these users are the general public, I would be very shocked.

2. Hiring contractors to submit data is a more controlled environment vs recruiting the general public via ads to submit data, and the former has better-understood privacy disclosures than the latter. In practice, asking contractors to wear glasses and "move around their surroundings naturally and do things" fits the basic privacy framing of "the data you are submitting, we can view and use all of it for purpose X and nothing but X". BUT that framing with ad-recruited people — who are general users willingly submitting data — is much, much harder. My suspicion is they are running ad-based recruiting of the general public, and while those users may have signed a privacy statement, it is very surprising that Meta did not tighten the privacy practices around the use of the data and who has access.

m-p-3 2 hours ago | parent | prev | next [-]

Absolutely no way I'd buy anything from Meta that has a camera built-in.

bluedino 2 hours ago | parent | prev | next [-]

What does "in row" mean? For us non-English English speakers.

e28eta 2 hours ago | parent | next [-]

“a noisy argument or fight”, from the Cambridge dictionary. I believe it’s primarily used in British English.

danparsonson 2 hours ago | parent | prev | next [-]

To add to the other replies, when it's an argument, it's pronounced like "how" not like "no".

bobthepanda 2 hours ago | parent | prev | next [-]

A row in this context is like a dispute or argument

prewett 2 hours ago | parent [-]

It's also pronounced r-ow (ow, as in I hurt myself) in this context, instead of r-oh, in case that helps the OP

oa335 2 hours ago | parent | prev | next [-]

in an argument

jacobtomlinson 2 hours ago | parent | prev [-]

"row" means "an argument"

jakecraige 2 hours ago | parent [-]

Yeah, I think it's more of a British English thing. It can also mean things like "in a fight". Like: "those two guys had a big row outside the pub the other night"

selimthegrim an hour ago | parent [-]

I always remembered it from Phantom Tollbooth "a DREADFUL Rauw"

prepend 2 hours ago | parent | prev | next [-]

I think Meta, like all companies, doesn’t want its subcontractors creating bad press for them.

So it doesn't surprise me that Meta didn't renew / cancelled a contract that is a net negative for them. Arguing over the reason seems fruitless, as no reason is needed per the terms of the contract (I assume, since breach of contract wasn't brought up by the sub).

malshe 2 hours ago | parent | prev | next [-]

A question for the HN folks who work for Meta - Is the pay so good that it makes it worth working for such a morally bankrupt organization?

allthetime 41 minutes ago | parent [-]

There are countless large, high paying, morally bankrupt companies out there. It’s no mystery that people continue to work for them.

cwillu an hour ago | parent | prev | next [-]

> Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording.

Because nobody knows how to put a dot of nail polish on an led they don't want seen, right?

loeg an hour ago | parent [-]

There is some detection for obstructing the LED. It's a little more clever than you assume.

f311a 3 hours ago | parent | prev | next [-]

Why do they even need workers to classify naked content? They could filter some content prior to passing it to workers. They already have models to moderate explicit content.
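A pre-filter along those lines is easy to sketch. Purely illustrative: the score field, the threshold, and the routing logic below are invented for the example, not Meta's actual moderation pipeline; the explicit-content score would come from an upstream ML model:

```python
# Hypothetical pre-filter: route clips through an automated explicit-content
# score before any human reviewer sees them. Clips at or above the threshold
# are auto-quarantined so annotators never view them.

from dataclasses import dataclass

EXPLICIT_THRESHOLD = 0.8  # illustrative cutoff, not a real setting

@dataclass
class Clip:
    clip_id: str
    explicit_score: float  # produced upstream by an automated classifier

def route_for_review(clips):
    """Split clips into (safe for human annotators, auto-quarantined)."""
    to_humans, quarantined = [], []
    for clip in clips:
        if clip.explicit_score >= EXPLICIT_THRESHOLD:
            quarantined.append(clip.clip_id)
        else:
            to_humans.append(clip.clip_id)
    return to_humans, quarantined

to_humans, quarantined = route_for_review([
    Clip("a", 0.05), Clip("b", 0.93), Clip("c", 0.40),
])
print(to_humans, quarantined)  # → ['a', 'c'] ['b']
```

Even an imperfect classifier used this way would shrink the pool of explicit material reaching workers, which is the commenter's point.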

letmetweakit 3 hours ago | parent | prev | next [-]

Unfortunately this news will have no impact: not on customer behavior, not on policy, and not on Meta's behavior.

swyx 29 minutes ago | parent | prev | next [-]

this may be the greatest title i've seen on hacker news in a decade

I_am_tiberius 3 hours ago | parent | prev | next [-]

Not a fan of regulation in general, but would love to see a ban of cameras on glasses used in public spaces.

pxc an hour ago | parent | next [-]

The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.

It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.

Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.

The impulse to "ban" is sometimes a bit naive imo.

stronglikedan 3 hours ago | parent | prev | next [-]

Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.

jnovek 2 hours ago | parent | next [-]

“You're already being recorded everywhere you go in public by the authorities”

You are the frog being boiled.

stfp 2 hours ago | parent | prev | next [-]

Because they will be popular and lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options that only a tiny tiny fraction of people would buy or use (and that we should also regulate)

applfanboysbgon an hour ago | parent | prev | next [-]

> What's the difference between that and one of the many, many concealed camera options that you don't even notice?

The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!

Retr0id 2 hours ago | parent | prev | next [-]

The problem is if it becomes socially normalized. If you're using a concealed camera and someone notices, you're a creep/asshole.

intended an hour ago | parent | prev [-]

Yet more regulation? We have regulation for these glasses already?

Aren’t there countries that make it mandatory to blot out faces of people on videos if they didn’t consent?

schnitzelstoat 2 hours ago | parent | prev [-]

If anything they should be banned in private spaces, like if someone wearing them enters someone's home etc.

There is no expectation of privacy in public.

ldoughty 2 hours ago | parent [-]

The owner of the private space generally has authority to deny this already, there's no need for an additional law.

In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate the ADA... A court would have to weigh in on whether glasses are allowed or not under the ADA... but I suspect there's already a case where a movie theater banned such glasses, and they would probably(?) win, since such individuals could be expected to have non-recording glasses.

talkingtab 3 hours ago | parent | prev | next [-]

Meta said the contracting "did not meet (Meta's) standards". I am sure that is true. Meta's "standard" is not to reveal the illegal, immoral, unethical things Meta does, no matter the harm.

Maybe a company with those standards should not get our business. Oops, no wait, maybe they mean the Friedman Doctrine standards? In that case they are entitled to do any and every thing to make a profit. No matter what the harm.

[edit: add last two sentences]

jaidhyani 2 hours ago | parent | next [-]

I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.

So when I say that they really do have a zero-tolerance policy for anyone using their internal systems to violate user privacy, it's not because I'm eager to defend them. It's just true (at least, it was when I was there). There are internal systems dedicated to making sure you have access to what you need to do your job, and absolutely nothing else. All content you interact with through internal tools is monitored and logged. If you get caught trying to use whatever access your job gives you for anything other than doing your job, security immediately escorts you out of the building. This is drilled into new hires early and often. For everything Meta gets wrong, they really do take this seriously.

malfist an hour ago | parent | next [-]

These contractors were hired to view this data. Your defense of Meta here doesn't make sense. Meta fired them for speaking out about the data Meta collects, not because they saw the data they were hired to look at.

nradov 9 minutes ago | parent [-]

Meta didn't fire individual independent contractors, they terminated a contract with a vendor. It's possible they did so because some of the vendor's employees spoke out but we don't know the real reason.

(I do think these smart glasses are super creepy and I'm not defending Meta's data collection practices.)

advisedwang an hour ago | parent | prev | next [-]

There's no allegation that these workers abused their access. The allegation is that their routine work reviewing footage included private content. The revelation is that USERS are using meta glasses non-consensually.

causal an hour ago | parent | prev | next [-]

The problem is that your comment and the one you're responding to can both be true: Just because the rules are heavily enforced does not mean the right rules are in place, starting with the fact that Meta is collecting this data to begin with.

thaumasiotes 20 minutes ago | parent [-]

> starting with the fact that Meta is collecting this data to begin with.

But that can't be the problem. They're collecting the data that users send them. To avoid collecting it despite the expressed wishes of the user, they'd need to be able to recognize it as untouchable.

And recognizing the data is the exact problem that this African firm was hired to help with. What do you want Meta to do?

DrewADesign a minute ago | parent | next [-]

Ok, let’s see that consent form and how explicitly it states that random call center people will possibly look at anything you record. I’ll bet you a crisp $50 it was a form designed to be as click-through-worthy as possible, being sure to not trigger the “wait, should I do this?” reflex in users, and also not loudly disclosing that you could still use the device without agreeing, if you even can, while still technically “””disclosing””” this information. The tech world has turned consent into a fucking joke.

magicalist 6 minutes ago | parent | prev [-]

> To avoid collecting it despite the expressed wishes of the user, they'd need to be able to recognize it as untouchable.

> And recognizing the data is the exact problem that this African firm was hired to help with. What do you want Meta to do?

This is written as if logically exhaustive, but it misses the very obvious alternative that none of these videos should have been reviewed by a human at all (aka no reason to "recognize it as untouchable"; they're all untouchable).

If you want to get stricter and talk about collecting at all, Meta already has that solution too, by leaving the video in the user's camera roll. Let the user manually add the video to the Meta AI app or whatever if they want to share it with others there.

thaumasiotes 3 minutes ago | parent [-]

> This is written as if logically exhaustive, but it misses the very obvious alternative that none of these videos should have been reviewed by a human at all (aka no reason to "recognize it as untouchable"; they're all untouchable).

No, taking that approach would mean that when someone sends you data that you aren't supposed to collect, you collect it anyway. This is the opposite of what was suggested above.

red_admiral 10 minutes ago | parent | prev | next [-]

Indeed, on this one point, Meta has higher standards than the NSA used to - Snowden mentioned that employees tracked their current wives/girlfriends so often it unofficially got the codename LOVEINT.

Same for "Meta reads your E2E whatsapp messages". Meta does many things, is probably massively net negative for civilisation, but it doesn't do that.

rkagerer an hour ago | parent | prev | next [-]

Many things about the world are worse than they could be because of choices Meta has made.

If Facebook were designed with a different set of incentives that prioritized the user, fostered positive engagement, and better respected individuals' privacy and data sovereignty - setting a better standard for the whole industry - I feel there wouldn't be all this fuss today about banning social media accounts.

bilekas an hour ago | parent [-]

It's likely they wouldn't be as profitable too though, and their mandate to shareholders is to make number go up.

cozzyd an hour ago | parent | prev | next [-]

Anecdotal of course, but I heard that this wasn't at all the case circa 2006 and that (then) FB employees would routinely read private messages and such. Obviously it wasn't a big company yet and probably didn't have those policies yet... (clearly the policies are there for a reason...)

bombcar 20 minutes ago | parent [-]

That’s my recollection too - there were some high profile cases and so institutional safeguards were established. They very well may be at the forefront of it - however, it’s a side issue to what’s being discussed.

thunderfork 42 minutes ago | parent | prev | next [-]

As someone who worked for a contractor which had Meta as a client, I disagree.

All advertiser support agents were given super-read on all profiles & pages, and I never once observed a CSR being questioned on their use of this access in any way.

bombcar 19 minutes ago | parent [-]

It’s often the case that employees are much more locked down than contractors, simply because the company is more liable for employee actions.

keybored 16 minutes ago | parent | prev | next [-]

> I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.

When did Facebook make the world not-worse?

iJohnDoe 39 minutes ago | parent | prev | next [-]

@jaidhyani I hate to burst your bubble, but there are major privacy violations here.

https://news.ycombinator.com/item?id=47226756

iJohnDoe 40 minutes ago | parent | prev | next [-]

@jaidhyani I hate to burst your bubble, but there are major privacy violations here.

https://news.ycombinator.com/item?id=47225130

2ndorderthought 2 hours ago | parent | prev | next [-]

Yea but no. Meta is a defense contractor that hires out to third parties exactly to do this. So you guys don't get to do that, but a lot of other people are. I hope that helped you sleep at night while you were there. But yeah, it all gets bought and sold at the end of the day.

The irony is Meta wants to implement verification to protect kids, meanwhile it's doing everything it can to exploit them at every single level, for profit and for the love of the game. Billions of dollars and the world's most advanced computers, all dedicated to it.

an hour ago | parent [-]
[deleted]
bathtub365 an hour ago | parent [-]

Meta and their employees have spent years breaking the public’s trust over and over again. Why should we trust anything they say?

deaux 22 minutes ago | parent | prev [-]

You're still on the koolaid, as many replies here accurately point out. Saying it's not because you're eager to defend them is lying to yourself, because you're smart enough to think of most of these replies yourself. Primarily the fact that these are contractors whose entire job is to watch smart-glasses footage; the point you're bringing up, even if we take it at face value, is completely irrelevant to this post.

If you truly want to atone for your sins, you have a long way to go. I don't blame you for having worked there; I've worked at places that are only a little better than Meta (which is hard, considering Meta is at the absolute bottom of the entire ladder, including Peter Thiel companies, thanks to Meta's sheer scale of carnage). But it's time to completely come to terms with the reality, rather than stopping halfway to try and feel better about your resume.

deepsquirrelnet 2 hours ago | parent | prev | next [-]

> At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI.

They just got fired for "piercing the veil". They committed the sin of bringing attention to the invasion of privacy.

alistairSH an hour ago | parent [-]

Were/are video recordings from the glasses advertised as being E2E encrypted?

Mostly, I'm just surprised that anybody would be naive enough to take a camera provided by Facebook into a sexual encounter and expect anything else.

ninth_ant 22 minutes ago | parent [-]

If you don’t disable the glasses they could continue to share content. The article describes the glasses being left on a dresser and then sharing content of people without their consent, which could easily parallel into showing a sexual encounter or other privacy-sensitive scenarios.

alistairSH 10 minutes ago | parent [-]

Sure, and the same is true with my iPhone or my Olympus. Except the former encrypts the video and the latter isn't internet-connected.

The problem here (other than Meta being Meta) is people assuming Meta isn't permanently operating in bad faith. I'm just surprised anybody into tech to the extent they'd buy first-gen VR glasses would be surprised at Meta doing Meta things. That's all, I guess.

burnte 2 hours ago | parent | prev | next [-]

Yeah, why the hell is Meta watching people's videos either? Why PAY a company to invade our privacy and watch our videos? It's flipping BIZARRE.

stingraycharles 2 hours ago | parent [-]

Isn’t that obvious from the article? They’re labeling content for training AIs, something which is happening all over the world constantly.

2ndorderthought an hour ago | parent | next [-]

Yep gotta bake in that personal data into generative models so it can be reproduced later for profit.

woodson an hour ago | parent [-]

Why generative? Or has it been decided that only generative models are “AI”?

throwpoaster an hour ago | parent [-]

What kind of model "reproduce"s things later for profit that is not generative?

Ritewut an hour ago | parent [-]

Surveillance models.

jmye 15 minutes ago | parent | prev [-]

And then people are shocked that no one wants the data centers for this shit built in their backyard.

stingraycharles 2 hours ago | parent | prev | next [-]

Unfortunately in today’s world where organizations are larger than many a country’s GDP, they really only have to face responsibility towards shareholders and maximizing profits is the thing they usually care about.

throwpoaster an hour ago | parent | prev | next [-]

That's not what the Friedman Doctrine is, technically. It is that management should obey moral, ethical, and legal frameworks in the operation of the business for the benefit of its investors; and specifically NOT take actions which are outside of that narrow scope.

Avicebron 35 minutes ago | parent [-]

Does that include trying to influence moral, ethical, and legal frameworks to the benefit of the investors as well? Because if it does, it is kind of a moot point.

prepend 2 hours ago | parent | prev [-]

Is it illegal or immoral? Having Meta review this material has to be approved by users and requires their consent.

There was an example in the article where a user’s glasses kept recording the user’s wife after he took them off. That’s bad but on the user, not Facebook.

Seems similar to a situation where someone takes nudes of someone without their consent and then sends them off to a lab to be printed. The lab isn’t doing anything illegal or unethical printing them when they ask the user “are these legal” and the user replies “yes.” Unless you want to stop photo printers from ever printing nudes, I think the responsibility is on the user, not the firm.

msh 2 hours ago | parent [-]

Is there explicit approval? Or is it buried in the legal agreements?

throwpoaster an hour ago | parent [-]

Legal agreements are explicit.

msh 32 minutes ago | parent [-]

Lol

theowsmnsn 3 hours ago | parent | prev | next [-]

Meta is so evil

0x1ceb00da 3 hours ago | parent [-]

Evil is the current meta

game_the0ry 24 minutes ago | parent | prev | next [-]

Can we boycott meta yet? I am sick of this company.

mmanfrin 33 minutes ago | parent | prev | next [-]

I got a paywall, first time I've seen that on BBC.

fortran77 an hour ago | parent | prev | next [-]

People have sex with their glasses on?

krupan an hour ago | parent | next [-]

I'm guessing at least some of these cases are where the glasses are sitting on a nightstand and still recording

kylehotchkiss an hour ago | parent | prev [-]

Are their partners even consenting to glasses with cameras??

dickeeT 2 hours ago | parent | prev | next [-]

I don't think smart glasses are a good idea in the first place

shevy-java 2 hours ago | parent | prev | next [-]

Facebook may have to rename itself into NaughtyBook or SpyBook or Pr0nBook. They really want people to help them spy on other people here - including their sex life. Expect new sexy videos in 3 ... 2 ...

hirvi74 2 hours ago | parent | prev | next [-]

Good. Anyone who works for such a company is immoral in my opinion.

aklemm an hour ago | parent | prev | next [-]

I bet the victims had their socks on too

JKCalhoun an hour ago | parent | prev | next [-]

Oops! Oh, too late. And another nail in the heart of smart glasses…

tamimio 2 hours ago | parent | prev | next [-]

> and was a common practice among other companies.

Meta isn’t lying; you should assume other companies are doing it too. Tesla did it with their cameras, and you should assume any company with access to your camera is doing the same; I would even assume CCTV cameras are. It’s why, for anything sensitive, you should try to use open source stacks. You might lose some of the features, but it’s a needed compromise.

jmyeet 2 hours ago | parent | prev | next [-]

So I've never had a smart speaker in my house (Alexa, Apple, Google). I've just never been comfortable with the idea of having an always-on cloud-connected microphone in my house. Not because I thought these companies would deliberately start listening and recording in my house, but because they will likely be careless with that data, and it'll open the door for law enforcement to request it. Consider the Google Wi-Fi scraping case from StreetView.

Or they might start scanning for "problematic" behavior, a bit like the Apple CSAM fingerprinting initiative.

So not one part of me would ever buy Meta glasses (or the Snap glasses before that). You simply don't have sufficient control over the recordings and big tech companies can't be trusted, as we've witnessed from outsourced workers sharing explicit images. And I bet that's just the tip of the iceberg.

I honestly don't understand why anyone would get these and trust Meta to manage the risks.

xerox13ster 2 hours ago | parent | next [-]

That is to say nothing of the new technological use cases that could develop from the already existing technology. They just haven’t been thought of or developed yet.

Things like audio scanning your living space using those Alexa smart speakers with ultrasonics to get an image of not only everything in your space, but where you are in that space as well.

That technological use case only came out within the last five or so years, maybe closer to eight. Either way, I could see that coming before it became a thing, just because ultrasound imaging of your unborn child is a thing, and ultrasound imaging of the sea floor is a thing, so why wouldn’t ultrasound imaging of your living space be a thing for a company that wants to know what you buy?

I never ever ever had Alexa. I only ever had a Google Home, because I got it for free with GPM, but I almost never used it because I hated the idea of it always listening.

I already regret Wi-Fi, because they've now figured out how to look through walls with it.

intended an hour ago | parent | prev [-]

You were wise enough to avoid this; unfortunately, for most people it's "shiny tech!"

rickdg 3 hours ago | parent | prev [-]

This is what happens when you buy a camera from the "they trust me, dumb fucks" guy and put it on your face.

aeve890 an hour ago | parent [-]

But aren't the users wearing glasses while nude or having sex dumb fucks though?