| ▲ | AnthonyMouse 17 hours ago |
| The main problem with the "report your age to the website" proposals is that they're backwards. You shouldn't be leaking your age to the service. Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it. |
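A minimal sketch of that model, assuming a hypothetical `Content-Rating` response header and made-up rating names (no such standard exists today): the service labels the content, and the parent-configured device decides whether to render it, so no age information ever leaves the device.

```python
# Sketch: server labels its content, client enforces the parent's
# setting locally. "Content-Rating" and the rating values here are
# hypothetical illustrations, not an existing standard.

RATING_ORDER = ["everyone", "teen", "adult"]

def should_display(response_headers: dict, device_max_rating: str) -> bool:
    """True if the device's parental setting allows this content."""
    rating = response_headers.get("Content-Rating", "everyone")
    return RATING_ORDER.index(rating) <= RATING_ORDER.index(device_max_rating)

# A device locked to "teen" hides adult-tagged pages; the service
# never learns the user's age:
print(should_display({"Content-Rating": "adult"}, "teen"))  # False
print(should_display({"Content-Rating": "teen"}, "adult"))  # True
```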
|
| ▲ | ray_v 16 hours ago | parent | next [-] |
| It may often be trickier than that - content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused. |
| |
▲ | just6979 6 hours ago | parent | next [-] | | If the content is mixed, it makes even more sense to have the content supply the age data. This is how it has worked with broadcast media pretty much forever: TV shows and movies gain their ratings based on the worst case on display. I.e., a show doesn't have to consist entirely of swearing to gain a "language" warning, it just has to have some. Definitively mixed content. I think your example exemplifies this. Among Us is not inherently adult-only, but since it's multiplayer, they don't control what other players say and do. Definitively mixed content. They should not be asking you to verify; they should be telling you and letting you decide if your kid can play. I kinda can't believe their lawyers decided to go that route and assume all the PII responsibility that comes with collecting that data, instead of just making the "it's online and there might be d-bags on our servers" rating much more obvious and explicit. | | |
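The "worst case on display" rule described above reduces to a one-liner: the overall rating is the maximum over the parts. Rating names here are illustrative, not any real ratings scheme.

```python
# Broadcast-style rating: the whole item inherits the rating of its
# most restrictive part. Names are made up for illustration.
RATING_ORDER = {"everyone": 0, "language-warning": 1, "adult": 2}

def overall_rating(part_ratings: list[str]) -> str:
    """Rate a show/page by its worst-case component."""
    return max(part_ratings, key=lambda r: RATING_ORDER[r])

# One sweary scene is enough to earn the warning for the whole show:
print(overall_rating(["everyone", "everyone", "language-warning"]))  # language-warning
```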
▲ | autoexec 4 hours ago | parent [-] | | They can profit off of the personal data they collect, so it's no surprise they'd take any opportunity and use any available excuse to collect more of it. From their perspective there is effectively zero responsibility to secure that data properly and handle it safely, because there are effectively zero consequences for companies when they fail to. |
| |
| ▲ | alexfoo 15 hours ago | parent | prev | next [-] | | There's a good chance that they're never going to verify any of the information you give them, in which case it's another download for Mr M Mouse of 1375 E Buena Vista Dr, 32830, with a SSN that ends in 1234. | | |
| ▲ | TYPE_FASTER 8 hours ago | parent | next [-] | | I made the mistake of providing my date of birth as being 1/1/1900 on multiple websites, and have been receiving marketing material from the AARP in the mail for many years. | | |
| ▲ | roryirvine 6 hours ago | parent | next [-] | | My "birthdate" is the same as yours. It was fine when I started using it in the late 90s, but has become increasingly awkward over the past few years - lots of sites seem to assume a maximum age of 120. If I ever turn uBO off, the ads I get are mostly for funeral plans or incontinence products, with a smattering of "126 year old mom lost 30 lbs of belly fat - click to see how!" (yeah, decomposition's a bitch...) | | |
| ▲ | palmotea 6 hours ago | parent [-] | | > If I ever turn uBO off, the ads I get are mostly for funeral plans or incontinence products, with a smattering of "126 year old mom lost 30 lbs of belly fat - click to see how!" (yeah, decomposition's a bitch...) And, for the record, it's way better to get ads for BS like that than stuff that may actually influence you. |
| |
| ▲ | just6979 6 hours ago | parent | prev [-] | | That's not a mistake. You'd be getting spam marketing anyway, why not make sure it's something obvious? I always pick the oldest possible age when asked, just to mess with their data, because they shouldn't fucking care. Don't limit, notify. Has worked for TV (and movies to an extent, though theaters do limit somewhat, must have been some litigation around that...) pretty much forever. |
| |
| ▲ | b112 15 hours ago | parent | prev [-] | | Giving fake info feeds the machine. It means you still consume, and a bad actor profits. | | |
▲ | thbb123 14 hours ago | parent | next [-] | | I disagree. Giving fake info adds noise to the mechanism, makes it useless. Ultimately I'm inclined to believe that privacy through noise generation is a solution. If I ever find some idle time, I'd like to make an agent that surfs the web under my identity and several fake ones, browsing randomly according to fake personality traits I program. Then, after some testing and analysis of the generated crawl patterns, release it as freeware to allow anyone to participate in obfuscating individuals' behaviors. | | |
| ▲ | noam_k 12 hours ago | parent | next [-] | | You might want to take a look at differential privacy. It takes an unintuitive amount of noise to make the system useless. You also need to account for how "easy" it is to de-anonymize a profile. (Sorry I don't have links to sources handy.) | | |
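To get a feel for the "unintuitive amount of noise" point, here is a toy Laplace mechanism - the textbook building block of differential privacy, not any production system. With a strong privacy budget (small epsilon), even a simple count query gets substantial noise:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of the Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, sensitivity: float = 1.0,
                  epsilon: float = 0.1) -> float:
    # Noise scale is sensitivity/epsilon: the smaller the privacy
    # budget epsilon, the larger the noise. At epsilon = 0.1 the
    # noise has a standard deviation of about 14 on a simple count.
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
print(private_count(100))  # a value scattered around 100
```

Note that this protects an aggregate query, not an individual profile; as the comment says, how hard a single profile is to de-anonymize is a separate question.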
| ▲ | aleph_minus_one 10 hours ago | parent [-] | | > You might want to take a look at differential privacy Differential privacy is just a bait to make surveillance more socially acceptable and to have arguments to silence critics ("no need to worry about the dangers - we have differential privacy"). :-( |
| |
▲ | b112 9 hours ago | parent | prev | next [-] | | > Giving fake info adds noise to the mechanism Yes, but in the case we're discussing: > It may often be trickier than that - content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused. The bad actor still gets ROI, e.g. gets 'paid', for another bit of user data. Making the overall system less useful is good. However, not letting a company profit at all is paramount, and giving fake info still lets it profit. E.g., even with fake info, many metrics on a phone are still gamed and profitable. That's why they're collected, after all. For profit. | |
| ▲ | autoexec 4 hours ago | parent | prev | next [-] | | > I disagree. Giving fake info adds noise to the mechanism, makes it useless. There's no such thing as useless info. Companies will sell it, buy it, and act on it regardless of how true it is. Nobody cares if the data is accurate. Nobody is checking to see if it is. Filling your dossier with false information about yourself won't stop companies from using that data. It can still cost you a job. It can still be used as justification to increase what companies charge you. It can still influence which policies they apply to you or what services they offer/deny you. It can still get you arrested or investigated by police. It can still get you targeted by scammers or extremists. Any and all of the data you give them will eventually be used against you somehow, no matter how false or misleading it is. Stuffing your dossier with more data does nothing but hand them more ammo to hit you with. | |
▲ | fsflover 12 hours ago | parent | prev [-] | | Sounds a bit like the AdNauseam Firefox extension. | |
| ▲ | autoexec 4 hours ago | parent | next [-] | | And just like AdNauseam using it would be dangerous and pointless. | |
▲ | thbb123 10 hours ago | parent | prev [-] | | In my vision, it's the opposite of an ad blocker: something that generates nonexistent traffic and views beyond what I would have done myself. | |
|
| |
▲ | acomjean 12 hours ago | parent | prev [-] | | Last century my dad would give out our pets' names with our real phone # (oddly, or by mistake). The pets did start getting phone calls. If the info becomes bad, it becomes much less useful and valuable. I'm in the US and we do need some rights to privacy. |
|
| |
| ▲ | cwillu 16 hours ago | parent | prev | next [-] | | That's an argument for “let the service inform the parent and let the parent decide”, not against it. | |
▲ | AnthonyMouse 16 hours ago | parent | prev | next [-] | | > It may often be trickier than that - content is often mixed, of course. So put the content tag at the granularity of the content. | |
▲ | onli 15 hours ago | parent | next [-] | | Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. Around 20 years ago, Germany actually made a law that would have enforced such a system. I still have a chart in my blog that explains it: https://www.onli-blogging.de/1026/JMStV-kurz-erklaert.html. Content for people over 16 would have had to be marked accordingly or be taken offline before 22:00; plus, if your site had a commercial character - which according to German courts is every single one in existence - you would have needed to hire someone responsible for protecting teenagers and children (a Jugendschutzbeauftragter). Result: it was seen as a big censorship machine, and I saw many sites and blogs shut down. You can maybe hold that law partly responsible for how far behind German internet enterprises still are. Only a particular kind of bureaucrat wants to do business in an environment that makes laws such as this. Later the law wasn't actually followed. Only state media still has a system that blocks films for adults (= basically every action movie) from being accessed without age verification before 22:00. | |
▲ | AnthonyMouse 15 hours ago | parent | next [-] | | > Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. You have that with any form of any of these things. They're almost certainly going to be set up so that you get in trouble for claiming that adult content isn't, but not for having non-adult content behind the adult content tag. Then you would be able to avoid legal questions by labeling your whole site as adult content, with the obvious drawback that then your whole site is labeled as adult content even though most of it isn't. But using ID requirements instead doesn't get you out of that. You'd still need to either identify which content requires someone to provide an ID before they can view it, or ID everyone. That's an argument for not doing any of these things, but not an argument for having ID requirements instead of content tags. | |
▲ | onli 15 hours ago | parent [-] | | Funnily enough, marking harmless content as adults-only was also punishable, though that might have been in the context of a different law. The reasoning: that would be censorship, blocking people under 18 from accessing legal content. Welcome to German bureaucracy. But you are right. It's an argument that "just mark content accordingly" is also not a better solution, not that ID requirements are in any way better. The only solution is not to enable this censorship infrastructure, because no matter which way it's done, it will always function as one. | | |
| ▲ | AnthonyMouse 14 hours ago | parent [-] | | > Funnily enough, marking content that's harmless as only for adults was also punishable, though that might have been in context of a different law. That would be censorship, blocking people under 18 from accessing legal content, was the reasoning. Welcome to German bureaucracy. That's how you get the thing where instead of using different equipment to process the food with and without sesame seeds, they just put sesame seeds in everything on purpose so they can accurately label them as containing sesame seeds. | | |
| ▲ | LinXitoW 12 hours ago | parent | next [-] | | An internet where every wikipedia article has like a picture of boobs as fine print would be very funny. | |
| ▲ | gzread 12 hours ago | parent | prev [-] | | I understand they can't say "contains sesame seeds" if it doesn't, but why can't they say "processed on equipment that also processes sesame seeds" like some packages do? | | |
▲ | AnthonyMouse 6 hours ago | parent [-] | | Some jurisdictions tried to ban them from saying "maybe", which is when they started putting the seeds in on purpose so they could say "definitely". |
|
|
|
| |
| ▲ | like_any_other 8 hours ago | parent | prev | next [-] | | > plus, if your site has a commercial character - which according to german courts is every single one in existence - you would need to hire a someone responsible for protecting teenagers and children (Jugenschutzbeauftragten). That is pretty much what the UK Online Safety Act requires: https://en.wikipedia.org/wiki/Online_Safety_Act_2023 Many small forums had to simply shut down, as was widely reported on HN at the time. | |
| ▲ | close04 15 hours ago | parent | prev [-] | | > Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. The alternative is that "just to be safe" you'll mark your entire site as needing age (identity, stool sample, whatever) verification. A single piece of sensitive content sets the requirements for the entire site. |
| |
| ▲ | valleyer 16 hours ago | parent | prev [-] | | Honestly, <span content-filter-level="adult">fuck</span> that. |
| |
▲ | VLM 9 hours ago | parent | prev | next [-] | | I would assume it's fake and an attempt at identity theft at some level of the system. Is their PC infected at the OS level, or is it just a fraudulent browser extension, or something more like a popup ad masquerading as a system dialog? A more trusting person would assume any request made by a computer is totally non-fraudulent and would gladly submit any requested private information. "Dad, I can't do my math homework, a pop-up says you need to provide a copy of your bank statement, your mom's maiden name, and a copy of your birth certificate, SS card, and driver's license, and can you hurry up Dad, my homework is due tomorrow morning." And people will fall for this once they get used to the system being absurd enough. The fraud machine must be kept fed... | |
| ▲ | iso1631 13 hours ago | parent | prev [-] | | It feels to me that parental controls are seen as another profit centre. If we want to put laws in place, we should be putting in laws to empower parents. |
|
|
| ▲ | qzx_pierri 5 hours ago | parent | prev | next [-] |
| > Then, if the content is for adults and you're not one, your parents can configure your device not to display it. That would require people to be responsible adults and actively parent their kids. It's ironic, because people in this country identify with how hard they grind at work, but refuse to put a fraction of that effort into being an involved parent. It's easier to just let the government ruin everyone else's good time online. In return, the parents: 1. Get the illusion that their kids are safer (they aren't) 2. Get a clear conscience, and feel better mentally equipped to run on their corporate hamster wheel |
|
| ▲ | tuetuopay 16 hours ago | parent | prev | next [-] |
Heh, that's already what parental controls do (granted, the websites don't report the content, and it's based on blacklists), but they are trivial to bypass. Even the article mentions it: > The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over That's precisely how I worked around the parental control my parents put on my computer when I was ~12. Get VirtualBox, get a Kubuntu ISO, and voilà! The funniest part is, I did not want to access adult content, but the software had thepiratebay on its blacklist, which I did want. In the end, I proudly showed them (look ma!), and they promptly removed the control from the computer, as you can't fight a motivated kid.
| |
| ▲ | AnthonyMouse 16 hours ago | parent | next [-] | | > but they are trivial to bypass. That's assuming the parental controls allow the kid to create a virtual machine. And then that the kid knows how to create a virtual machine, which is already at the level of difficulty of getting the high school senior who is already 18 to loan you their ID. None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest. | | |
▲ | tuetuopay 15 hours ago | parent | next [-] | | We could argue about the technical feasibility all day, as non-KVM QEMU does not need any special permission to run a VM (albeit dog slow). I honestly don't really agree on the difficulty: if this becomes a commonplace way to bypass such laws, you can expect TikTok to be full of videos about how to do it. People will provide already-installed VMs as a turnkey solution. It's not unlike how generations of kids playing Minecraft learnt how to port forward and how to install VPNs for non-alleged-privacy reasons: something that was considered out of a kid's reach became a commodity. > None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest. On that we agree, and it makes me sad. The gap between the computer literate and illiterate will only widen as time passes. Unmotivated kids will learn less, and motivated ones will get a kickstart by going around the locks. | |
| ▲ | AnthonyMouse 15 hours ago | parent [-] | | > We could argue on the technical feasability all day, as non-kvm qemu does not need any special permission to run a VM (albeit dog slow). That's assuming the permission is for "use of kernel-mode hardware virtualization" rather than "installation of virtualization apps". Notice that if the kid can run arbitrary code then any of this was already a moot point because then they can already access websites in other countries that don't enforce any of this stuff. |
| |
▲ | Ntrails 15 hours ago | parent | prev | next [-] | | If the kid knows how to ask an LLM, they can do whatever technical hacks are required | |
▲ | autoexec 4 hours ago | parent [-] | | Would that make the LLM (or the company that made it) liable under the DMCA for showing someone how to work around a digital lock that controls access to a copyrighted work? |
| |
▲ | b112 15 hours ago | parent | prev | next [-] | | > And then that the kid knows how to create a virtual machine It's just a bunch of clicks, even under Linux. Just install VirtualBox. It literally walks you through VM creation. | |
| ▲ | AnthonyMouse 15 hours ago | parent [-] | | > It's just a bunch of clicks I promise there are people who can't figure out how to do it. And again, the point of the lock on the door where you keep the porn is not to be robustly impenetrable to entry by a motivated 16 year old with a sledgehammer, it's only to make it obvious that they're not intended to go in there. | | |
▲ | bonoboTP 13 hours ago | parent [-] | | Depends on how much people want the hidden content. People in Eastern Europe - regular people, not tech whiz kids - know how to use torrents and know about seed ratios etc. At least it was so ca. 5 years ago. People can learn when the thing matters to them. Regular people want to get things done; the tinkering is not a goal in itself for them, and they gravitate to simple and convenient ways of achieving things, and don't care about abstract principles like open source or tech advantages, or what they see as tinfoil-hat stuff. But if they want to see their favorite TV series or movie, they will jump through hoops. Similarly for this case. |
|
| |
| ▲ | stavros 15 hours ago | parent | prev [-] | | It might be Fort Knox just fine at some point, when computers will require a cryptographically signed government certificate that you're over 18, and you can't use the computer until you provide it. | | |
| ▲ | AnthonyMouse 15 hours ago | parent | next [-] | | Even in that case the large majority of the population would then have that certificate and the motivated minors would just beg, borrow or steal one. | |
| ▲ | voakbasda 8 hours ago | parent | prev [-] | | No one has ever faked a government ID? | | |
| ▲ | stavros 6 hours ago | parent [-] | | Nope, not a zero-knowledge proof with cryptographic signatures. |
|
|
| |
▲ | muyuu 13 hours ago | parent | prev | next [-] | | a kid who can install Linux, or set up an ssh tunnel to a seedbox, is a kid who doesn't need to be told by the government what he or she should be watching. That is the job of parents/guardians | |
▲ | sidewndr46 10 hours ago | parent [-] | | I'd actually argue that's exactly the kid the government is there to tell what they shouldn't be watching. The government is never really there to restrict the incompetent; they are pretty good at doing that themselves. | |
▲ | muyuu 9 hours ago | parent [-] | | it's the kid they are up against, but not the kid who "needs" it |
|
| |
| ▲ | ycombinator_acc 14 hours ago | parent | prev [-] | | There's an ocean of difference between your device changing behavior based on a flag set by individual sites and your device using a blacklist set by some list maintainer - the main difference being that the latter is utterly useless due to being an example of badness enumeration. |
|
|
| ▲ | idle_zealot 15 hours ago | parent | prev | next [-] |
| > Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it. That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Then specifically design your telemetry to avoid logging which version is running? You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. That's hardly PII. More like a dark/light mode preference, or your language settings (which your browser does send). |
| |
▲ | AnthonyMouse 14 hours ago | parent | next [-] | | > That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Suppose you had an ID requirement instead. Are you going to make two different versions of your game or website, one for people who show ID and another for people who don't? If so, do the same thing. If not, then you have one version and it's either for adults only or it isn't. > You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. Except that you essentially are reporting your age, because when you turn 18 the flag changes, which is a pretty strong signal that you just turned 18, and once they deduce your age they can calculate it going forward indefinitely. This is even worse if it's an automated system, because then the flag changes exactly when you turn 18, down to the day, which by itself is ~14 bits of entropy towards uniquely identifying you, and in a city of 100,000 people they only need ~17 bits in total. | |
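The bit arithmetic roughly checks out; a quick back-of-envelope, where the ~30-year spread of plausible birth years is an assumption made for illustration:

```python
import math

# An exact birth date (leaked by the day the "adult" flag flips)
# picks one day out of roughly 30 years of plausible birth years,
# and singling out one person in a city of 100,000 needs about
# log2(100,000) bits of identifying information in total.
days = 365 * 30
print(round(math.log2(days), 1))     # ~13.4 bits from the birth date alone
print(round(math.log2(100_000), 1))  # ~16.6 bits to identify one of 100k
```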
| ▲ | gzread 12 hours ago | parent [-] | | The alternative wasn't an ID requirement, the alternative was the client/OS sending the flag to the server/app. | | |
| ▲ | AnthonyMouse 7 hours ago | parent [-] | | The fear is that once you have devices sending services a flag, some asshats are going to start demanding that it be verified by the government. But how does that do anything for you either way? Either you have two different versions based on whether the flag is present or not or you have one version and if it's adults only then you have to send the flag indicating you're an adult in order to use it. | | |
| ▲ | gzread 7 hours ago | parent [-] | | Browsers send a language flag to servers but I don't see anyone asking for a certification that you actually know that language. | | |
| ▲ | AnthonyMouse 4 hours ago | parent [-] | | I don't see anyone asking that browsers be legislatively required to send a language tag without certification either. |
|
|
|
| |
▲ | lavela 14 hours ago | parent | prev | next [-] | | The shifts between flags will correlate with date of birth, though - or do you think someone turning 16 or 18 will wait a year or two to switch to more adult content for privacy? Also, I'd guess the tech industry would push for more specific age buckets. Games already have PG ratings and similar in different countries; I don't see the issue there. Web content could set an age-appropriateness header and let browsers deal with it, either for specific content or for the whole website if it relies on e.g. addictive mechanics. Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work. | |
| ▲ | idle_zealot 14 hours ago | parent [-] | | > Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work. Sure. Take a game with voice chat. Child mode disables voice chat. How does the game, which presumably uses a load of telemetry, avoid incidentally leaking which users are children via the lack of voice telemetry data coming from the client? It's probably possible, but the fact is we're talking about third party code running on a computer, and the computer running different code paths based on some value. The third party code knows that value, and if it has internet access can exfiltrate it. In that sense, if there's an internet connection, there's not a meaningful difference between "the OS tells the service/app your age rating preference" and "the OS changes what it displays based on your age rating preference." Though while I'm throwing out fantasy policies we could solve this by banning pervasive surveillance outright. | | |
| ▲ | AnthonyMouse 6 hours ago | parent [-] | | You're assuming that everything not mandatory is prohibited. If the device is required to provide every service with the flag, every service gets the flag, even if it contains no adult content or adult content that the user agent could display or not without the service having a way to know about it. The service would then have to deduce the information instead of getting it explicitly and may be able to do that some of the time instead of all of the time, which is an improvement. And then people can work on anti-fingerprinting technologies with the premise that if they succeed it actually does something, instead of the information being required by law to leak to the service. |
|
| |
▲ | l72 11 hours ago | parent | prev [-] | | Games already have ratings. Every app submitted to the App Store or Google Play is rated. 90% of an R-rated movie might be OK for a 12-year-old, but that one violent or sex scene makes it R. Should we be rating every scene in movies? Give parents general guidance and let them define the controls. |
|
|
| ▲ | glitchc 10 hours ago | parent | prev | next [-] |
| Windows already allows this. Content can be set based on age in Microsoft Family. Set an age on a user's account and MS curates the store experience, regardless of which computer the user is logged into. |
|
| ▲ | IndySun 13 hours ago | parent | prev | next [-] |
| Who decides the 'nature' of the content? Who decides what constitutes age appropriate? These questions of liberty are as old as the hills. And the keepers of the internet and virtually every single government past and present have repeatedly and endlessly shown themselves to be lying, conniving, self interested parties. When will 'we' ever learn? *who decides who 'we' are. |
|
| ▲ | avhception 11 hours ago | parent | prev | next [-] |
| I haven't even thought of this, I'm kinda surprised! This should be how it's done! |
|
| ▲ | gzread 12 hours ago | parent | prev | next [-] |
| It's necessary if the page contains mixed content. Under your proposal, Google Search would need a separate search page that shows adult content, and that would be even worse for privacy - logs would show whether you accessed the adult search page - and adult sites (not only porn) would try quite hard to not be relegated to that second, less discoverable, search page. |
| |
| ▲ | _heimdall 12 hours ago | parent [-] | | What you're describing with Google Search already exists, search engines already offer their own search settings including "safe search" or whatever they call it which filters out adult images. Services can absolutely decide to provide their own content settings. It doesn't require a universal setting or OS requirements, and it doesn't require providing PII to every website or telling a central authority every site you visit. |
|
|
| ▲ | panzi 14 hours ago | parent | prev [-] |
Exactly. Except this way you can't build a complete biometric database of all citizens! Since it's so obvious how to do it correctly without creating such a database, one could assume that the creation of such a database is the actual goal.