uyzstvqs 9 hours ago

People need to understand the difference between age indication and age verification. Two very different things. Age indication is a completely private alternative that is realistically just as effective as invasive age verification.

Age _indication_ means that when you set up your device or create a user account, you enter a date of birth for the user. The OS then provides a native API to return a user's age bracket (not full date-of-birth). If the user is a minor, the OS will require parental authentication in some way to modify the setting again. This can all be done completely offline. It works because parents almost always buy the devices used by children, and can enter the correct date-of-birth during setup.
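To make this concrete, here is a sketch of what such an offline API could look like. All names and the bracket boundaries are invented for illustration, not any real OS interface:

```python
# Sketch only: a hypothetical offline age-indication API. The bracket
# boundaries and all names here are invented, not any real OS interface.
from dataclasses import dataclass
from datetime import date

# (min_age, max_age, bracket_name)
BRACKETS = [(0, 12, "child"), (13, 15, "young_teen"),
            (16, 17, "older_teen"), (18, 200, "adult")]

@dataclass
class DeviceProfile:
    birth_date: date  # entered by the parent at setup; never leaves the device

    def age_bracket(self, today: date) -> str:
        # Compute age in whole years, then map it to a coarse bracket.
        had_birthday = (today.month, today.day) >= (self.birth_date.month,
                                                    self.birth_date.day)
        age = today.year - self.birth_date.year - (0 if had_birthday else 1)
        for lo, hi, name in BRACKETS:
            if lo <= age <= hi:
                return name
        return "adult"

profile = DeviceProfile(birth_date=date(2012, 6, 1))
print(profile.age_bracket(date(2026, 3, 1)))  # -> young_teen
```

The point is that only the coarse bracket is ever exposed, and the whole computation runs locally with no network access.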

Age _verification_ means that some online service has to verify your age, and collects a bunch of (meta)data in the process. This is highly problematic for privacy, security, and the open internet.

txrx0000 4 hours ago | parent | next [-]

There are two things very very wrong with the California law, which you call "age indication".

1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.
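A minimal sketch of the on-device check I mean (the bracket names, the rating channel, and the 18+ default for unrated content are all my assumptions):

```python
# Hedged sketch: the OS compares a site's advertised minimum age to the
# locally stored bracket. Only the rating comes in; nothing goes out.
from typing import Optional

# Conservative lower bound of each locally stored age bracket.
BRACKET_MIN_AGE = {"child": 0, "young_teen": 13, "older_teen": 16, "adult": 18}

def allow(content_min_age: Optional[int], user_bracket: str) -> bool:
    """Decide on-device whether content is appropriate. Unrated content
    (None) is treated as 18+ by default."""
    min_age = 18 if content_min_age is None else content_min_age
    return BRACKET_MIN_AGE[user_bracket] >= min_age

print(allow(13, "young_teen"))   # -> True
print(allow(None, "young_teen")) # -> False: unrated defaults to adult-only
```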

2) The age API is not "completely private". It's a legally-mandated data point that can be used to track a user across apps and websites. We must reject all legally-mandated tracking data points because it sets the precedent for even more mandatory tracking to be added in the future. We should not be providing an API that makes it easier for web platforms to get their hands on user data!

For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protecting children". They craft a narrative to convince you that top-down digital surveillance and access-control is for your own good, but it's time we reject that and flip their narrative upside down: https://news.ycombinator.com/item?id=47472805

heavyset_go 4 hours ago | parent | next [-]

> For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protecting children". They craft a narrative to convince you that top-down digital surveillance and access-control is for your own good, but it's time we reject that and flip their narrative upside down

The EFF has a good series related to this[1].

[1] https://www.eff.org/deeplinks/2026/03/rep-finke-was-right-ag...

ekr____ 3 hours ago | parent | prev | next [-]

> 1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.

FWIW, this is not quite an accurate description of AB1043, in at least three respects:

1. Apps don't get your exact age, just an age range.

2. Websites don't get your age at all.

3. AB1043 itself doesn't mandate any content restrictions; it just says that the app now has "actual knowledge" of the user's age. That's not to say that there aren't other laws which require age-specific behaviors, but this particular one is pretty thin on this.

In addition, I certainly understand the position that the age range shouldn't leave the computer, but I'm not sure how well that works technically, assuming you want age-based content restrictions. First, a number of the behaviors that age assurance laws want to restrict are hard to implement client side. For example, the NY SAFE For Kids act forbids algorithmic feeds, and for obvious reasons that's a lot easier to do on the server. Second, even if you do have device-side filtering, it's hard to prevent the site/app from learning what age brackets are in place, because they can experimentally provide content with different age markings and see what's accepted and what's blocked. Cooper, Arnao, and I discuss this in some more detail on pp 39--42 of our report on Age Assurance: https://kgi.georgetown.edu/research-and-commentary/age-assur...
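The probing attack can be sketched in a few lines (helper names are hypothetical; the callback stands in for whatever observable the site has, e.g. resource load/error beacons):

```python
# Sketch of the probing attack: a site serves probe resources tagged with
# different minimum ages and observes which loads succeed, inferring the
# device's filter setting without ever calling an age API.
def infer_filter_threshold(load_blocked, age_tags=(13, 16, 18)):
    """load_blocked(tag) -> bool, observed e.g. via load/error beacons."""
    for tag in sorted(age_tags):
        if load_blocked(tag):
            return tag  # first tag the device refuses: user is below this age
    return None  # nothing blocked: device behaves like an adult's

# Example: a device configured to block content rated 16 and up.
print(infer_filter_threshold(lambda tag: tag >= 16))  # -> 16
```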

I'm not saying that this makes a material difference in how you should feel about AB 1043, just trying to clarify the technical situation.

txrx0000 2 hours ago | parent | next [-]

Thanks for the clarification.

Regarding what to do with algorithmic feeds: instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child. We could implement a password lock for app installation/updates at the OS level, enabled in the phone's settings, that works like Linux's sudo: every time you install/uninstall/update an app, it asks for a password. Then parents would be able to choose which apps can run on their child's device.

Notice their strategy: these companies make it hard or impossible to uninstall preloaded apps, make it hard to develop competing apps and OSes, and degrade the non-preloaded software UX on purpose, which creates the artificial need to filter the feeds on the existing platforms these companies control. They also monopolize the app store and gatekeep which apps can be listed on it, and which OS APIs non-affiliated apps can use. Instead of accepting that and settling for just filtering those existing platforms' feeds, we should have the option to abandon them entirely.

We need the phone hardware companies to open-source their device firmware and drivers, and to let the device owner lock/unlock the bootloader with a password. Then we could never end up in the current situation, where OSes come preinstalled with bloat like TikTok or Facebook, the bootloader is locked so you can't even install a different OS, and your phone becomes a brick when they stop providing updates. If we allowed software competition, child protection would never have been a problem in the first place, because people could make child-friendly toy apps and toy OSes, and control what apps and OS run on the hardware they purchased. Parents would have lots of child-friendly choices. This digital parenting problem was manufactured by the same companies now trying to sell us a "solution" like this Cali bill, or in other cases ID verification, which coincidentally makes it easier for them to track people online.

kelnos 2 hours ago | parent [-]

> instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child.

Isn't that how parental controls already work?

There are problems, though:

1. The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

2. Web browsers. Unless the parent is going to uninstall and disallow web browser use, the kid can still sign into whatever service they want using the web browser. I don't think parental controls block specific sites, and even if they do, there are ways around that, certainly.

I am very often the person who says that parents should actually parent their kids and not rely on the government to nanny them. But in this case I think there actually is value to the government making laws that make Facebook (etc.) less evil. And as a bonus, maybe they'll be forced to be less evil to adults too!

txrx0000 5 minutes ago | parent [-]

1. The current norm of social siloing apps was created by these tech companies in the first place. What regulators can do is discourage anti-competitive practices that lock users into specific software and hardware platforms. If there's plenty of competition for every kind of social app, and competition for OSes, and users could freely choose and move between them, then not having a particular app would not result in social isolation. This affects adults as well.

2. The OS has a firewall. But it's currently not user-controllable on your phone. Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites. We can even make it possible to import/export site lists as a txt file so that you can download/share a curated block list that you or other parents made, to block many things at once. You could also do this for your entire home WiFi network in your WiFi router's settings, if your router's firmware has that feature.
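The blocklist format could be as dumb as possible. A sketch of the import side (one domain per line, '#' comments; purely illustrative):

```python
# Sketch of the shareable plain-text blocklist idea: one domain per line,
# '#' comments allowed, applied to the host of every outgoing connection.
def load_blocklist(text: str) -> set[str]:
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip().lower()
        if line:
            domains.add(line)
    return domains

def is_blocked(host: str, blocklist: set[str]) -> bool:
    # Block the listed domain and all of its subdomains.
    host = host.lower()
    return any(host == d or host.endswith("." + d) for d in blocklist)

rules = load_blocklist("facebook.com  # shared by another parent\ntiktok.com\n")
print(is_blocked("m.facebook.com", rules))  # -> True
```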

And yeah, I agree that we should make the platforms less evil in general. But I think the way to do that is to give people the ability to easily ditch bad platforms and build new ones. Let the platforms actually compete, and the best will prevail. Right now, the best don't prevail because of layers and layers of anti-competitive barriers. It would take great technical effort to regulate all the tricks these tech companies use, which is why I propose dealing with it at the root: make it so that all computer/phone hardware manufacturers must open-source their device drivers and firmware, and let the user lock/unlock the bootloader and install alternative OSes. If we do this, the entire software ecosystem will fix itself over time, along with all the downstream problems.

iririririr 2 hours ago | parent | prev [-]

[dead]

packetlost 2 hours ago | parent | prev | next [-]

1. I don’t see how that’s better in any real way. You can infer the exact same information as you’d get from querying the range, and it makes dynamic behavior based on age range (e.g., access to age-restricted chat rooms) completely impossible.

2. Is it meaningfully more identifying than User-Agent? There are dozens of other data points for uniquely identifying a user. If we get a few high-profile lawsuits because advertising companies knowingly showed harmful ads to children, I’d consider it a win. Age is not that interesting a data point.

throwaway173738 2 hours ago | parent | next [-]

I wouldn’t focus on whether it’s “identifying” but on whether it’s revealing. Young teenagers are a very high-value target for advertisers: they are very impressionable, and they give advertisers a proxy for their parents’ money. So this law essentially makes it mandatory to share that information with advertisers. And, by proxy, with predators.

packetlost an hour ago | parent [-]

It also makes it explicitly illegal to use it for such purposes. While I agree on the point, I think in practice it changes little. I also think it could be a net positive, because now there’s no plausible deniability about the target’s age, opening up a decent amount of liability for exploitative practices targeting children specifically.

kelnos 2 hours ago | parent | prev | next [-]

> I don’t see how that’s better in any real way.

It's so much better. In the one case, the OS is leaking age information (even if just an age range) to every service it talks to. In the other case, the OS isn't telling anyone anything, and is just responding to the age rating that the app/service advertises.

packetlost an hour ago | parent [-]

That response reveals exactly the same information.

txrx0000 an hour ago | parent | prev [-]

1. Depends on how it's implemented. It won't identify you to individual platforms if the OS filters on a per-app or per-website basis. And yeah, there would be no dynamic behavior based on age, as that would enable tracking based on age. I don't think any kind of API is the ideal solution though, it's just better than the malicious one being mandated in the Cali bill. Instead of an API, it's simpler and more effective to just have an app installation lock (like sudo on Linux) and a firewall for website blocking with a nice UI in the phone's settings, locked behind a password/pin.

2. Other data points like User-Agent are not required by law, and browsers already spoof the user agent by default. I agree that there are other data points we need to address, but the problem in this specific case is the slippery slope of legally-mandated data points. And I don't think winning high-profile lawsuits is a real "win"; it just exposes a problem we already know about in this case. Keep in mind those people can get away with the Epstein files.

Ferret7446 4 hours ago | parent | prev [-]

> The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age.

How would you make that happen? Many websites would not be subject to your jurisdiction.

txrx0000 4 hours ago | parent | next [-]

Assume they're 18+ then.

But even that's still not a great solution. I outline a better solution that doesn't require any legal enforcement at all, in the link at the bottom of my original comment.

kelnos 2 hours ago | parent | prev | next [-]

So? The same problem exists for having the OS broadcast the user's age range to all apps/services/websites: the service outside your jurisdiction doesn't have to actually restrict content based on age.

At least with the reverse system (services broadcast an age rating), you have some nice properties:

1. You can set it up so that if the service doesn't broadcast an age rating, access is denied.

2. You aren't leaking age information (even if it's just a range) to random websites outside your jurisdiction.

ekr____ 3 hours ago | parent | prev [-]

We're actually seeing this play out right now with the server-based age assurance systems that are already widely deployed and mandated under the UK Online Safety Act and laws in about 25 US states. In many cases, the sites just comply, presumably because they are worried that the regulators have a way to reach them even if they aren't hosted in the relevant jurisdiction. In some cases, however, the sites just ignore the regulations or tell the regulators to pound sand, as 4chan is doing with UK Ofcom: https://www.bbc.com/news/articles/c624330lg1ko

heavyset_go 5 hours ago | parent | prev | next [-]

It's a distinction that hinges on one law from one state that doesn't reflect the reality of the dozens of laws in dozens of states, nor proposed federal legislation, that all require age verification via AI face scans and ID uploads.

That's to say, this distinction is meaningless unless you're planning on blocking every jurisdiction outside of California so you can just adhere to its age verification laws and no one else's.

Havoc 5 hours ago | parent | prev | next [-]

That's just setting things up for a smoother slippery slope...

As appealing as the private part sounds, I genuinely think it may make the situation worse here by facilitating the transition and muddying the waters.

EmbarrassedHelp 8 hours ago | parent | prev | next [-]

The issue though with "age indication" is that it creates an additional flag that can be used to fingerprint users. But it is infinitely preferable to any sort of age verification or age assurance.

ddtaylor 2 hours ago | parent | prev | next [-]

A pointless slippery slope to try to stand on, one that points directly at the Overton window being drawn around this.

ekr____ 9 hours ago | parent | prev [-]

I like the term "age indication". Thank you.

If I may nitpick, the conventional term for systems which attempt to determine the user's age is "age assurance". This covers a variety of techniques, which are typically broken down into:

* Age estimation, which is based on statistical models of some physical characteristic (e.g., facial age estimation).

* Age verification, which uses identity documents such as driver's licenses.

* Age inference, which tries to determine the user's age range from some identifier, e.g., by using your email address to see how old your account is.

These distinctions aren't perfect by any means, and it's not uncommon to see "age verification" used for all three of these together, but "age assurance" is the more typical umbrella term.