ranyume 15 hours ago

This might be off-topic, but it's on-topic for child safety... I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They already have so much data on their users and optimize their algorithms for those groups in an opaque way.

So yeah, age verification should be taken down, as well as the data mining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.

nandomrumber 13 hours ago | parent | next [-]

> people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.

Hogwash.

Where are these mythical people who aren’t concerned with both?

jbstack 10 hours ago | parent [-]

> Where are these mythical people who aren’t concerned with both?

They're called politicians.

LoganDark 15 hours ago | parent | prev | next [-]

Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.

ranyume 15 hours ago | parent | next [-]

> Monitoring children's DMs is the responsibility of the parents, not megacorps

Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question and make do with megacorps not being responsible. This amounts to: "we'll allow megacorps to stay as they are and not take any responsibility for the effects they cause in society". Instead of them taking responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).

acuozzo 14 hours ago | parent | next [-]

> But what responsibilities do megacorps have? Right now, everyone seems to avoid this question

Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.

da_chicken 14 hours ago | parent | next [-]

So there should be a human operator manually gatekeeping every individual request to connect with another endpoint?

It's a good thing those human operators couldn't listen in to whichever conversation they wanted.

acuozzo 13 hours ago | parent [-]

Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.

(Reconsider my post. I'm arguing for no regulation.)

lmz 10 hours ago | parent [-]

Sure. And "lawful access" intercept capabilities are also required of telcos.

ranyume 14 hours ago | parent | prev | next [-]

I'd say that at minimum social networks need to be required to show how their algorithms work and to allow users control over their data. Users must be able to know why a piece of content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.

Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.

drnick1 5 hours ago | parent | next [-]

> Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests

I think this is the real issue. We should free ourselves from "social networks" such as Tiktok, Facebook, Instagram and others. Even with direct messages truly E2EE, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.

acuozzo 14 hours ago | parent | prev [-]

> social networks need to be required to show how their algorithm works

Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?

Would it not be an undue burden to necessitate the release of the weights every time they change?

Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.

Wouldn't enforcing algorithmic interpretability additionally be an undue burden?

> They must be able to know why a content was served to them.

What if the authors of the code are unable to tell you why?

BlueTemplar 12 hours ago | parent [-]

The use of black boxes like neural networks is already effectively illegal in some jurisdictions for this very reason.

techpression 13 hours ago | parent | prev | next [-]

I don’t remember reading about ads in phone calls, nor about the complete mapping of customers' behavior for use in contexts beyond the phone call itself.

The apples-to-oranges in this comparison is probably top five on HN ever.

iso1631 9 hours ago | parent | prev [-]

Whatever was required of the New York Times and nothing more.

If the NYT publishes an advert or editorial, it's held accountable for the contents.

j16sdiz 14 hours ago | parent | prev | next [-]

> But what responsibilities do megacorps have?

Fake and scam ads.

They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.

LoganDark 14 hours ago | parent | prev [-]

> But what responsibilities do megacorps have?

They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.

prmoustache 6 hours ago | parent | prev | next [-]

I also think children do/should have a right to privacy and their parents do not have to know everything.

Kids should be able to write a journal or talk to friends with total trust that this information will not reach their parents.

KaiserPro 11 hours ago | parent | prev | next [-]

> Monitoring children's DMs is the responsibility of the parents, not megacorps.

Yup, but the tools provided make that easy or hard.

But putting that emotive bit to one side, megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.

It's a bigger issue than encryption; it's editorial choice.

gzread 7 hours ago | parent | prev | next [-]

The simplest way that can work is for the child account to be linked to a parent account, and the parent account can see the child account's DMs.
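As a minimal sketch of that linking (all names hypothetical, not any platform's actual data model), the permission check only needs to know whether the viewer is the account owner or its linked parent:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    parent: "Account | None" = None       # set only on child accounts
    dms: list[str] = field(default_factory=list)

def can_read_dms(viewer: Account, target: Account) -> bool:
    """A viewer may read DMs on their own account or on a linked child's."""
    return viewer is target or target.parent is viewer

parent = Account("mum")
child = Account("kid", parent=parent)
child.dms.append("hi from a classmate")

assert can_read_dms(parent, child)      # linked parent may read
assert not can_read_dms(child, parent)  # the link is one-way
```

The one-way link is the important design choice: the child gains no visibility into the parent's account.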

baq 13 hours ago | parent | prev | next [-]

Mega corps should be compelled to and rewarded for allowing parents to monitor their children’s dms.

duped 15 hours ago | parent | prev | next [-]

Parents shouldn't give their child access to a device that allows DMs.

That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.

greygoo222 15 hours ago | parent [-]

Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.

iso1631 7 hours ago | parent [-]

Control over who they can talk to (if needed), and certainly monitoring of both who they talk to and, in many situations, what the contents are.

At some point between the ages of 0 and 18 the child has to become fully ready for an independent world. A cliff edge is a terrible idea: allowing 3-year-olds unmonitored, uncontrolled conversations with strangers is a terrible idea, but so is not allowing 15-year-olds to talk to their friends.

DANmode 12 hours ago | parent | prev | next [-]

> maybe an employer on a work-provided device.

The children yearn for the mines(?).

iso1631 9 hours ago | parent | prev [-]

I'm all for helping parents do this. Any site requiring age verification should indicate this as an HTTP header or whatever, the browser I allow my child to use should respect that, and the parental controls should be easy for me to engage with.
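A sketch of how that could work, assuming a made-up `Age-Verification-Required` response header (no such standard header exists today; the name and semantics here are purely illustrative): the site declares its requirement, and a parental-control layer on the child's device enforces it.

```python
def should_block(response_headers: dict[str, str], child_age: int) -> bool:
    """Parental-control check against a hypothetical age-declaration header."""
    required = response_headers.get("Age-Verification-Required")
    if required is None:
        return False                  # site declares no age requirement
    return child_age < int(required)  # block if the child is under it

print(should_block({"Age-Verification-Required": "18"}, 12))  # True
print(should_block({}, 12))                                   # False
```

The point of the scheme is that enforcement happens on the device the parent controls, so the site never learns anything about the visitor.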

Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental pin number to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but does require the parent to be on the apple ecosystem too.

EA and Microsoft, however, are terrible, especially as it's likely the child will be playing Fortnite/Minecraft and the parent won't have ever touched it. I think with Minecraft we had to make something like 5 or 6 accounts across three different sites to allow online Minecraft play from a Nintendo Switch.

Dban1 13 hours ago | parent | prev | next [-]

I thought it was common knowledge to just set your birthdate to 1970 or something

input_sh 12 hours ago | parent [-]

You can make it a nice round 2000 these days.

Nursie 13 hours ago | parent | prev | next [-]

> Age verification should be banned

Why?

> They already got so much data on their users

There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
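As a rough illustration of the idea (not any specific standard), a trusted issuer can attest only to the predicate "is over 18", and the service verifies that attestation without ever seeing a name or birthdate. The shared-key HMAC signature below is a deliberate simplification; real verifiable-credential schemes use public-key signatures, selective disclosure, or zero-knowledge predicate proofs.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stands in for the issuer's signing key

def issue_credential(over_18: bool) -> dict:
    """Issuer checks the real birthdate, then attests ONLY the predicate."""
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """The service sees only the boolean claim, never name or birthdate."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and cred["claim"]["over_18"]

print(verify_credential(issue_credential(True)))  # True: age proven, no identity disclosed
```

The service learns one bit plus the fact that some issuer vouched for it; tampering with the claim invalidates the signature.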

shakna 13 hours ago | parent | next [-]

Age verification eliminates anonymity on the internet. If everything you do _can_ be tracked by the government, it _will_ be.

That allows for more effective propaganda and electoral control, and it torches the very concept of a government _representing_ anyone.

Nursie 13 hours ago | parent | next [-]

> Age verification eliminates anonymity on the internet.

How so?

Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.

shakna 11 hours ago | parent | next [-]

Last time my government tried that, they failed. [0]

You need to 100% trust those verification services. And considering their success rate [1], you shouldn't.

[0] https://thinkingcybersecurity.com/DigitalID/

[1] https://discord.com/press-releases/update-on-security-incide...

Nursie 10 hours ago | parent [-]

> You need to 100% trust those verification services.

First link: the mitigation is to use a well-supported standard like OIDC, not a home-cooked scheme. Duh.

Second link: this is part of the problem that schemes like verifiable credentials are designed to address, namely random third parties collecting ID they don't need.

Yes, any system needs to be executed well. Neither of these really displays that.

shakna 10 hours ago | parent [-]

If _the government_ can't be trusted not to use a dumbass scheme, then no, it isn't a duh moment. You don't exactly get to dictate how the government implements it!

The point is that systems today aren't really well executed, so it is unreasonable to expect future ones to be.

If you can't trust people not to build the bomb well - then don't let them build a bomb.

Nursie 9 hours ago | parent [-]

> You don't exactly get to dictate how the government implements it!

Who was talking about the government implementing it? I wasn't.

And also "This has been done poorly in the past so we should never attempt to do it again, better" seems an odd way to go about things. There are well put together schemes by international standards bodies in this area now. Neither of the above links followed them.

shakna 9 hours ago | parent [-]

If neither follow them, why do you have such faith that anybody would...?

Nursie 9 hours ago | parent [-]

I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?

Because we don't believe anyone will ever use the standards in this area, despite loads of companies and government bodies actually using OIDC already?

I'm not really sure what you're driving at.

shakna 8 hours ago | parent [-]

> I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?

MyGovID _is_ an age verifier. Sorry. The successor after the rebrand is called myID [0], and it's advertised as:

> myID is a secure way to prove who you are online.

---

> I'm not really sure what you're driving at.

Clearly. You seem to think that because it might one day be done correctly, by one group, the rest of the world is safe. However, over in this reality, we have fuck ups by governments and private corporations, who are the people the rest of the world actually deals with.

You cannot force these real groups to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws, because it will always be done the wrong way by someone.

[0] https://www.myid.gov.au/

Nursie 7 hours ago | parent [-]

> The successor after the rebrand, is called myID [0], and advertised as:

It's an identity scheme and SSO solution for accessing government services. As said at [0] in the "What is myID" section.

I sincerely hope that they're using something standard and well tested like OIDC behind the scenes this time, because otherwise it's ripe for another fuckup like the one you linked. If it is also used for age verification that appears to be secondary.

> You cannot enforce these real groups, to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws. Because it will always be done the wrong way, by someone.

So we need to stop the Australian government from ever using an SSO/identity solution again because it can't be trusted to do it properly, having messed up in the past, and the rest of us have had to live with the consequences. And as they aren't the only ones to have messed up (companies do it all the time too), we should also ban all identity and SSO solutions (because that's what we're talking about in this thread: banning age verification, not mandating it).

I don't think you get to call out age validation as a uniquely hard problem that cannot possibly be made safe, but allow other identity-style services a pass. There are many areas in which we (through the government) can and do mandate good practice, both by government and private entities.

[0] https://my.gov.au/en/about/help/digital-id

afiori 12 hours ago | parent | prev [-]

Because most implementations are not going to be like that.

Nursie 11 hours ago | parent [-]

In the context of "Age verification should be banned" though, we're already talking about legislative intervention. If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.

Perhaps what we're really saying is "Ban age verification that collects lots of personal information".

Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)

Which I think is a good thing, and should be strengthened further.

(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)

AnthonyMouse 10 hours ago | parent [-]

> If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.

There is a problem with schemes like that.

The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.

The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.

Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.

Nursie 10 hours ago | parent [-]

> There is a problem with schemes like that.

/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.

Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".

Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.

There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.

Almondsetat 13 hours ago | parent | prev [-]

Ok, and? Presenting your ID at a number of IRL establishments also heavily reduces anonymity.

gschizas 12 hours ago | parent | next [-]

The difference is that IRL establishments don't sell off that data to anyone else, nor do they have the ability to collate that data with data from other establishments to make a profile of you.

(at least not yet)

shakna 11 hours ago | parent | prev [-]

But to get that ID from the bottleo, you need to hold them at gunpoint.

To get it from Discord you need to sneeze.

The internet has a scale and availability that physical locations do not.

pjc50 10 hours ago | parent | prev | next [-]

The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically due to social media, that is simply not how politics and lawmaking works any more. Instead it's an emotionally driven fight between various different sorts of moral panic, and the only option is to get people more mad about surveillance than "think of the children".

You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.

(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)

echelon 13 hours ago | parent | prev [-]

It's a slippery slope.

This is the next two steps into 1984.

Once you start mandating this, there's no going back.

The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)

sham1 13 hours ago | parent | next [-]

The Party doesn't care about the Proles, only the members of the Outer Party.

I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).

I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.

Nursie 13 hours ago | parent | prev | next [-]

> It's a slippery slope.

Is it? I thought that was a logical fallacy?

> This is the next two steps into 1984.

How so?

> Once you start mandating this, there's no going back. > The next generation will start associating wrongthink with government IDs.

Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.

consp 12 hours ago | parent [-]

A slippery slope can be a valid argument if you provide the actual reasoning for it; as I was taught, it can be used as deductive argumentation (though that does not say much). On its own it is a fallacy.

I don't see how verifiable credentials with zero-knowledge proofs create that slope, however.

drawfloat 12 hours ago | parent | prev [-]

Read another book.
