| |
| ▲ | AnthonyMouse 17 hours ago | parent | next [-] | | The main problem with the "report your age to the website" proposals is that they're backwards. You shouldn't be leaking your age to the service. Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it. | | |
| ▲ | ray_v 16 hours ago | parent | next [-] | | It may often be trickier than that - content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us, where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused. | | |
| ▲ | just6979 6 hours ago | parent | next [-] | | If the content is mixed, it makes even more sense to have the content supply the age data. This is how it has worked with broadcast media pretty much forever. TV shows and movies gain their ratings based on the worst case on display. I.e.: a show doesn't have to consist entirely of swearing to gain a "language" warning, it just has to have some. Definitively mixed content. I think your example exemplifies this. Among Us is not inherently adult-only, but since it's multiplayer, they don't control what other players say and do. Definitively mixed content. They should not be asking you to verify, they should be telling you and letting you decide if your kid can play. I kinda can't believe their lawyers decided to go that route and assume all the PII responsibility that comes with collecting that data, instead of just making the "it's online and there might be d-bags on our servers" rating much more obvious and explicit. | | |
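The worst-case rule described above is easy to mechanize on the content side: the service labels each item with the rating of its most restrictive element, and the device does the rest. A minimal sketch, assuming an invented "Content-Rating" header and made-up bracket labels (not any real standard):

```python
# Worst-case-wins rating, as with TV/movie ratings: one adult-rated
# element makes the whole item adult-rated. The header name and the
# bracket labels are hypothetical, for illustration only.

RATINGS = {"everyone": 0, "teen": 13, "adult": 18}

def response_headers_for(item: dict) -> dict:
    """Tag a content item with the rating of its worst-case element."""
    worst = max(RATINGS[tag] for tag in item["tags"])
    label = next(k for k, v in RATINGS.items() if v == worst)
    return {"Content-Rating": label}

post = {"body": "mostly harmless thread", "tags": ["everyone", "adult"]}
print(response_headers_for(post))  # {'Content-Rating': 'adult'}
```

Under this rule a multiplayer game with open chat would simply always carry the adult (or at least "unmoderated chat") label, which is exactly the "telling you and letting you decide" behavior suggested above.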
| ▲ | autoexec 4 hours ago | parent [-] | | They can profit off of the personal data they collect, so it's no surprise they'd take any opportunity and use any available excuse to collect more of it. From their perspective there is effectively zero responsibility to secure that data properly and handle it safely, because there are effectively zero consequences for companies when they fail to do so. |
| |
| ▲ | alexfoo 15 hours ago | parent | prev | next [-] | | There's a good chance that they're never going to verify any of the information you give them, in which case it's another download for Mr M Mouse of 1375 E Buena Vista Dr, 32830, with a SSN that ends in 1234. | | |
| ▲ | TYPE_FASTER 8 hours ago | parent | next [-] | | I made the mistake of providing my date of birth as being 1/1/1900 on multiple websites, and have been receiving marketing material from the AARP in the mail for many years. | | |
| ▲ | roryirvine 6 hours ago | parent | next [-] | | My "birthdate" is the same as yours. It was fine when I started using it in the late 90s, but has become increasingly awkward over the past few years - lots of sites seem to assume a maximum age of 120. If I ever turn uBO off, the ads I get are mostly for funeral plans or incontinence products, with a smattering of "126 year old mom lost 30 lbs of belly fat - click to see how!" (yeah, decomposition's a bitch...) | | |
| ▲ | palmotea 6 hours ago | parent [-] | | > If I ever turn uBO off, the ads I get are mostly for funeral plans or incontinence products, with a smattering of "126 year old mom lost 30 lbs of belly fat - click to see how!" (yeah, decomposition's a bitch...) And, for the record, it's way better to get ads for BS like that than stuff that may actually influence you. |
| |
| ▲ | just6979 6 hours ago | parent | prev [-] | | That's not a mistake. You'd be getting marketing spam anyway; why not make sure it's something obvious? I always pick the oldest possible age when asked, just to mess with their data, because they shouldn't fucking care. Don't limit, notify. It has worked for TV (and movies to an extent, though theaters do limit somewhat; must have been some litigation around that...) pretty much forever. |
| |
| ▲ | b112 15 hours ago | parent | prev [-] | | Giving fake info feeds the machine. It means you still consume, and a bad actor profits. | | |
| ▲ | thbb123 14 hours ago | parent | next [-] | | I disagree. Giving fake info adds noise to the mechanism, makes it useless. Ultimately I'm inclined to believe that privacy through noise generation is a solution. If I ever find some idle time, I'd like to make an agent that surfs the web under my identity and several fake ones, but randomly according to several fake personality traits I program. Then, after some testing and analysis of the generated patterns of crawl, release it as freeware to allow anyone to participate in the obfuscation of individuals' behaviors. | | |
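A sketch of the persona-weighting part of such an agent (personas, topics, and weights are all invented here; a real version would actually fetch pages rather than just plan visits):

```python
import random

# Each fake persona weights topic categories differently; the agent
# samples its "browsing" from those weights, so the generated traffic
# has a consistent but fictional personality. All names are made up.

PERSONAS = {
    "gardener":  {"plants": 5, "sports": 1, "finance": 1},
    "daytrader": {"plants": 1, "sports": 1, "finance": 5},
}

def plan_visits(persona: str, n: int, rng: random.Random) -> list:
    """Sample n topic visits according to the persona's weights."""
    topics = list(PERSONAS[persona])
    weights = [PERSONAS[persona][t] for t in topics]
    return rng.choices(topics, weights=weights, k=n)

print(plan_visits("gardener", 10, random.Random(42)))
```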
| ▲ | noam_k 12 hours ago | parent | next [-] | | You might want to take a look at differential privacy. It takes an unintuitive amount of noise to make the system useless. You also need to account for how "easy" it is to de-anonymize a profile. (Sorry I don't have links to sources handy.) | | |
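To make that concrete, the textbook starting point is randomized response: each individual answer is noisy enough to deny, yet the aggregate statistic is still recoverable, which is why naive noise is less crippling to the collector than it feels. A minimal sketch (the 75% truth probability is illustrative, not tuned to any formal privacy budget):

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report the real answer with probability p_truth, otherwise a
    coin flip. Any single report is plausibly deniable."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Undo the noise in aggregate: observed = p*true + (1-p)*0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
# Population where the true answer is always "yes": individual reports
# are noisy, but the estimated rate still lands very near 1.0.
reports = [randomized_response(True) for _ in range(100_000)]
print(f"estimated true rate: {estimate_true_rate(reports):.3f}")
```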
| ▲ | aleph_minus_one 10 hours ago | parent [-] | | > You might want to take a look at differential privacy Differential privacy is just a bait to make surveillance more socially acceptable and to have arguments to silence critics ("no need to worry about the dangers - we have differential privacy"). :-( |
| |
| ▲ | b112 9 hours ago | parent | prev | next [-] | | > Giving fake info adds noise to the mechanism Yes, but in this case which we're discussing: > It may often be trickier than that - content is often mixed, of course. My 10 y/o hit me with a request yesterday to play Among Us where the age verification system wanted my full name, address, email, AND the last 4 digits of my SSN. I refused. The bad actor still gets ROI, e.g. 'paid', for another bit of user data. Making the overall system less useful is good. However, what's paramount is not letting a company profit at all, and giving fake info still allows for that. E.g., even with fake info, many metrics on a phone are still gamed and profitable. That's why they're collected, after all. For profit. | |
| ▲ | autoexec 4 hours ago | parent | prev | next [-] | | > I disagree. Giving fake info adds noise to the mechanism, makes it useless. There's no such thing as useless info. Companies will sell it, buy it, and act on it regardless of how true it is. Nobody cares if the data is accurate. Nobody is checking to see if it is. Filling your dossier with false information about yourself won't stop companies from using that data. It can still cost you a job. It can still be used as justification to increase what companies charge you. It can still influence which policies they apply to you or what services they offer/deny you. It can still get you arrested or investigated by police. It can still get you targeted by scammers or extremists. Any and all of the data you give them will eventually be used against you somehow, no matter how false or misleading it is. Stuffing your dossier with more data does nothing but hand them more ammo to hit you with. | |
| ▲ | fsflover 12 hours ago | parent | prev [-] | | Sounds a bit like the AdNauseam Firefox extension. | |
| ▲ | autoexec 4 hours ago | parent | next [-] | | And just like AdNauseam using it would be dangerous and pointless. | |
| ▲ | thbb123 10 hours ago | parent | prev [-] | | In my vision, it's the opposite of an ad blocker: it's something that generates nonexistent traffic and views beyond what I would have done. | |
|
| |
| ▲ | acomjean 12 hours ago | parent | prev [-] | | Last century my dad would give out our pets' names with our real phone # (oddly, or by mistake). The pets did start getting phone calls. If the info becomes bad, it becomes much less useful and valuable. I’m in the US and we do need some rights to privacy. |
|
| |
| ▲ | cwillu 16 hours ago | parent | prev | next [-] | | That's an argument for “let the service inform the parent and let the parent decide”, not against it. | |
| ▲ | AnthonyMouse 16 hours ago | parent | prev | next [-] | | > It may often times be trickier than that - content often mixed of course. So put the content tag at the granularity of the content. | | |
| ▲ | onli 15 hours ago | parent | next [-] | | Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. Around 20 years ago, Germany actually made a law that would have enforced such a system. I still have a chart in my blog that explained it, https://www.onli-blogging.de/1026/JMStV-kurz-erklaert.html. Content for people over 16 would have had to be marked accordingly or be put offline before 22:00; plus, if your site has a commercial character - which according to German courts is every single one in existence - you would need to hire someone responsible for protecting teenagers and children (a Jugendschutzbeauftragter). Result: it was seen as a big censorship machine, and I saw many sites and blogs shut down. You can maybe hold that law partly responsible for how far behind German internet enterprises still are. Only a particular kind of bureaucrat wants to do business in an environment that makes laws such as this. Later, the law wasn't actually followed. Only state media still has a system that blocks films for adults (= basically every action movie) from being accessed without age verification if not past 22:00. | |
| ▲ | AnthonyMouse 15 hours ago | parent | next [-] | | > Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. You have that with any form of any of these things. They're almost certainly going to be set up so that you get in trouble for claiming that adult content isn't but not for having non-adult content behind the adult content tag. Then you would be able to avoid legal questions by labeling your whole site as adult content, with the obvious drawback that then your whole site is labeled as adult content even though most of it isn't. But using ID requirements instead doesn't get you out of that. You'd still need to either identify which content requires someone to provide an ID before they can view it, or ID everyone. That's an argument for not doing any of these things, but not an argument for having ID requirements instead of content tags. | | |
| ▲ | onli 15 hours ago | parent [-] | | Funnily enough, marking content that's harmless as only for adults was also punishable, though that might have been in the context of a different law. That would be censorship, blocking people under 18 from accessing legal content, was the reasoning. Welcome to German bureaucracy. But you are right. It's an argument that "just mark content accordingly" is also not a better solution, not that ID requirements are in any way better. The only solution is not to enable this censorship infrastructure, because no matter which way it's done, it will always function as one. | |
| ▲ | AnthonyMouse 14 hours ago | parent [-] | | > Funnily enough, marking content that's harmless as only for adults was also punishable, though that might have been in the context of a different law. That would be censorship, blocking people under 18 from accessing legal content, was the reasoning. Welcome to German bureaucracy. That's how you get the thing where instead of using different equipment to process the food with and without sesame seeds, they just put sesame seeds in everything on purpose so they can accurately label them as containing sesame seeds. | |
| ▲ | LinXitoW 12 hours ago | parent | next [-] | | An internet where every wikipedia article has like a picture of boobs as fine print would be very funny. | |
| ▲ | gzread 12 hours ago | parent | prev [-] | | I understand they can't say "contains sesame seeds" if it doesn't, but why can't they say "processed on equipment that also processes sesame seeds" like some packages do? | | |
| ▲ | AnthonyMouse 6 hours ago | parent [-] | | Some jurisdictions tried to ban them from saying maybe which is when they started putting them in on purpose so they could say definitely. |
|
|
|
| |
| ▲ | like_any_other 8 hours ago | parent | prev | next [-] | | > plus, if your site has a commercial character - which according to German courts is every single one in existence - you would need to hire someone responsible for protecting teenagers and children (Jugendschutzbeauftragten) That is pretty much what the UK Online Safety Act requires: https://en.wikipedia.org/wiki/Online_Safety_Act_2023 Many small forums had to simply shut down, as was widely reported on HN at the time. |
| ▲ | close04 15 hours ago | parent | prev [-] | | > Awesome. Now you have a system where every blog entry, every Facebook post needs a lawyer consultation. The alternative is that "just to be safe" you'll mark your entire site as needing age (identity, stool sample, whatever) verification. A single piece of sensitive content sets the requirements for the entire site. |
| |
| ▲ | valleyer 16 hours ago | parent | prev [-] | | Honestly, <span content-filter-level="adult">fuck</span> that. |
| |
| ▲ | VLM 9 hours ago | parent | prev | next [-] | | I would assume it's fake and an attempt at identity theft at some level of the system. Is their PC infected at the OS level, or is it just a fraudulent browser extension, or something more like a popup ad masquerading as a system dialogue? A more trusting person would assume any request made by a computer is totally non-fraudulent and would gladly submit any requested private information. "Dad, I can't do my math homework, a pop up says you need to provide a copy of your bank statement, your mom's maiden name, and a copy of your birth certificate, SS card, and drivers license, and can you hurry up Dad, my homework is due tomorrow morning." And people will fall for this once they get used to the system being absurd enough. The fraud machine must be kept fed... |
| ▲ | iso1631 13 hours ago | parent | prev [-] | | It feels to me that parental controls are seen as another profit centre. If we want to put laws in place, we should be putting in laws to empower parents. |
| |
| ▲ | qzx_pierri 5 hours ago | parent | prev | next [-] | | >Then, if the content is for adults and you're not one, your parents can configure your device not to display it. That would require people to be a responsible adult and actively parent their kids. It's ironic, because people in this country identify with how hard they grind at work, but refuse to put a fraction of that effort into being an involved parent. It's easier to just let the government ruin everyone else's good time online. In return, the parents: 1. Get the illusion that their kids are safer (they aren't) 2. Get a clear conscience, and feel better mentally equipped to run on their corporate hamster wheel | |
| ▲ | tuetuopay 16 hours ago | parent | prev | next [-] | | Heh, that's already what parental controls do (granted, the websites don't report the content, and it's based on blacklists), but they are trivial to bypass. Even the article mentions it: > The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over It's precisely how I worked around the parental control my parents put on my computer when I was ~12. Get VirtualBox, get a Kubuntu ISO, and voilà! The funniest part is, I did not want to access adult content, but the software had thepiratebay on its blacklist, which I did want. In the end, I proudly showed them (look ma!), and they promptly removed the control from the computer, as you can't fight a motivated kid. | |
| ▲ | AnthonyMouse 16 hours ago | parent | next [-] | | > but they are trivial to bypass. That's assuming the parental controls allow the kid to create a virtual machine. And then that the kid knows how to create a virtual machine, which is already at the level of difficulty of getting the high school senior who is already 18 to loan you their ID. None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest. | | |
| ▲ | tuetuopay 15 hours ago | parent | next [-] | | We could argue about the technical feasibility all day, as non-KVM qemu does not need any special permission to run a VM (albeit dog slow). I honestly don't really agree on the difficulty: if this becomes a commonplace way to bypass such laws, you can expect TikTok to be full of videos about how to do it. People will provide already-installed VMs as a turnkey solution. It's not unlike how generations of kids playing Minecraft learnt how to port forward and how to install VPNs for non-alleged-privacy reasons: something that was considered out of a kid's reach became a commodity. > None of this stuff is ever going to be Fort Knox. Locks are for keeping honest people honest. On that we agree, and it makes me sad. The gap between the computer literate and illiterate will only widen as time passes. Non-motivated kids will learn less, and motivated ones will get a kickstart by going around the locks. | |
| ▲ | AnthonyMouse 15 hours ago | parent [-] | | > We could argue about the technical feasibility all day, as non-KVM qemu does not need any special permission to run a VM (albeit dog slow). That's assuming the permission is for "use of kernel-mode hardware virtualization" rather than "installation of virtualization apps". Notice that if the kid can run arbitrary code then any of this was already a moot point, because then they can already access websites in other countries that don't enforce any of this stuff. |
| |
| ▲ | Ntrails 16 hours ago | parent | prev | next [-] | | If the kid knows how to ask an LLM, they can do whatever technical hacks are required. | |
| ▲ | autoexec 4 hours ago | parent [-] | | Would that make the LLM (or the company who made it) liable under the DMCA for showing someone how to work around a digital lock that controls access to a copyrighted work? |
| |
| ▲ | b112 15 hours ago | parent | prev | next [-] | | > And then that the kid knows how to create a virtual machine It's just a bunch of clicks, even under Linux. Just install VirtualBox. It literally walks you through VM creation. | |
| ▲ | AnthonyMouse 15 hours ago | parent [-] | | > It's just a bunch of clicks I promise there are people who can't figure out how to do it. And again, the point of the lock on the door where you keep the porn is not to be robustly impenetrable to entry by a motivated 16 year old with a sledgehammer, it's only to make it obvious that they're not intended to go in there. | | |
| ▲ | bonoboTP 13 hours ago | parent [-] | | Depends on how much people want the hidden content. People in Eastern Europe, regular people, not tech whiz kids, know how to use torrents and know about seed ratios etc. At least it was so ca. 5 years ago. People can learn when the thing matters to them. Regular people want to get things done; the tinkering is not a goal for them in itself, and they gravitate to simple and convenient ways of achieving things, and don't care about abstract principles like open source or tech advantages or what they see as tinfoil hat stuff. But if they want to see their favorite TV series or movie, they will jump through hoops. Similarly for this case. |
|
| |
| ▲ | stavros 15 hours ago | parent | prev [-] | | It might be Fort Knox just fine at some point, when computers will require a cryptographically signed government certificate that you're over 18, and you can't use the computer until you provide it. | | |
| ▲ | AnthonyMouse 15 hours ago | parent | next [-] | | Even in that case the large majority of the population would then have that certificate and the motivated minors would just beg, borrow or steal one. | |
| ▲ | voakbasda 8 hours ago | parent | prev [-] | | No one has ever faked a government ID? | | |
| ▲ | stavros 7 hours ago | parent [-] | | Nope, not a zero-knowledge proof with cryptographic signatures. |
|
|
| |
| ▲ | muyuu 13 hours ago | parent | prev | next [-] | | a kid who can install Linux, or set up an ssh tunnel to a seedbox, is a kid who doesn't need to be told by the government what he or she should be watching that is the job of parents/guardians | | |
| ▲ | sidewndr46 10 hours ago | parent [-] | | I'd actually argue that's exactly the kid the government is there to tell what they shouldn't be watching. The government is never really there to restrict the incompetent; they are pretty good at doing that themselves. | |
| ▲ | muyuu 9 hours ago | parent [-] | | it's the kid they are up against to, but not the kid who "needs" it |
|
| |
| ▲ | ycombinator_acc 14 hours ago | parent | prev [-] | | There's an ocean of difference between your device changing behavior based on a flag set by individual sites and your device using a blacklist set by some list maintainer - the main difference being that the latter is utterly useless due to being an example of badness enumeration. |
| |
| ▲ | idle_zealot 15 hours ago | parent | prev | next [-] | | > Instead, the service should be telling your device the nature of the content. Then, if the content is for adults and you're not one, your parents can configure your device not to display it. That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Then specifically design your telemetry to avoid logging which version is running? You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. That's hardly PII. More like a dark/light mode preference, or your language settings (which your browser does send). | | |
| ▲ | AnthonyMouse 14 hours ago | parent | next [-] | | > That makes sense for purely offline media playback, but how could that work for a game or application or website? Ship several versions of the app for the different brackets and let the OS choose which to run? Suppose you had an ID requirement instead. Are you going to make two different versions of your game or website, one for people who show ID and another for people who don't? If so, do the same thing. If not, then you have one version and it's either for adults only or it isn't. > You'd also not be reporting your age, you'd be sending a "please treat me like an adult" or "please treat me like a child" flag. Except that you essentially are reporting your age, because when you turn 18 the flag changes, which is a pretty strong signal that you just turned 18 and once they deduce your age they can calculate it going forward indefinitely. This is even worse if it's an automated system because then the flag changes exactly when you turn 18, down to the day, which by itself is ~14 bits of entropy towards uniquely identifying you and in a city of a 100,000 people they only need ~17 bits in total. | | |
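The parent's arithmetic can be checked directly; this sketch assumes an adult user base spread over roughly 45 birth years and a city of 100,000, as in the comment:

```python
import math

# Entropy from learning an exact 18th-birthday date, assuming birth
# dates spread over ~45 years of adult users.
birthday_bits = math.log2(365 * 45)
print(round(birthday_bits, 1))  # 14.0

# Bits needed to single out one person among 100,000.
city_bits = math.log2(100_000)
print(round(city_bits, 1))  # 16.6
```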
| ▲ | gzread 12 hours ago | parent [-] | | The alternative wasn't an ID requirement, the alternative was the client/OS sending the flag to the server/app. | | |
| ▲ | AnthonyMouse 7 hours ago | parent [-] | | The fear is that once you have devices sending services a flag, some asshats are going to start demanding that it be verified by the government. But how does that do anything for you either way? Either you have two different versions based on whether the flag is present or not or you have one version and if it's adults only then you have to send the flag indicating you're an adult in order to use it. | | |
| ▲ | gzread 7 hours ago | parent [-] | | Browsers send a language flag to servers but I don't see anyone asking for a certification that you actually know that language. | | |
| ▲ | AnthonyMouse 4 hours ago | parent [-] | | I don't see anyone asking that browsers be legislatively required to send a language tag without certification either. |
|
|
|
| |
| ▲ | lavela 15 hours ago | parent | prev | next [-] | | The shifts between flags will correlate with date of birth though, or do you think someone turning 16 or 18 will wait a year or two to switch to more adult content for privacy? Also I'd guess the tech industry would push for more specific age buckets. Games already have PG ratings and similar in different countries, I don't see the issue there. Web content could set an age-appropriateness header and let browsers deal with it, either for specific content or for the whole website if it relies on e.g. addictive mechanics. Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work. | |
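On the browser side, honoring such a header could be as small as this sketch (the "Content-Rating" name and bracket labels are invented, and treating unlabeled content as adult is one possible design choice, not anything mandated):

```python
# Toy user-agent-side filter: the device knows the locally configured
# age, the server only sends a rating label, and nothing about the
# user leaves the device. Header name and labels are hypothetical.

MIN_AGE = {"everyone": 0, "teen": 13, "adult": 18}

def should_render(response_headers: dict, configured_age: int) -> bool:
    """Decide locally whether to display a response."""
    label = response_headers.get("Content-Rating", "adult")  # fail closed
    return configured_age >= MIN_AGE.get(label, 18)

print(should_render({"Content-Rating": "teen"}, 15))   # True
print(should_render({"Content-Rating": "adult"}, 15))  # False
print(should_render({}, 15))                           # False
```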
| ▲ | idle_zealot 14 hours ago | parent [-] | | > Applications is a wide field, but I'd be interested in specific examples where you think it wouldn't work. Sure. Take a game with voice chat. Child mode disables voice chat. How does the game, which presumably uses a load of telemetry, avoid incidentally leaking which users are children via the lack of voice telemetry data coming from the client? It's probably possible, but the fact is we're talking about third party code running on a computer, and the computer running different code paths based on some value. The third party code knows that value, and if it has internet access can exfiltrate it. In that sense, if there's an internet connection, there's not a meaningful difference between "the OS tells the service/app your age rating preference" and "the OS changes what it displays based on your age rating preference." Though while I'm throwing out fantasy policies we could solve this by banning pervasive surveillance outright. | | |
| ▲ | AnthonyMouse 6 hours ago | parent [-] | | You're assuming that everything not mandatory is prohibited. If the device is required to provide every service with the flag, every service gets the flag, even if it contains no adult content or adult content that the user agent could display or not without the service having a way to know about it. The service would then have to deduce the information instead of getting it explicitly and may be able to do that some of the time instead of all of the time, which is an improvement. And then people can work on anti-fingerprinting technologies with the premise that if they succeed it actually does something, instead of the information being required by law to leak to the service. |
|
| |
| ▲ | l72 11 hours ago | parent | prev [-] | | Games already have ratings. Every app submitted to the App Store or Google Play is rated. 90% of an R-rated movie might be OK for a 12 year old, but those one or two violent or sex scenes make it R. Should we be rating every scene in movies? Give parents general guidance and let them define the controls. |
| |
| ▲ | glitchc 10 hours ago | parent | prev | next [-] | | Windows already allows this. Content can be set based on age in Microsoft Family. Set an age on a user's account and MS curates the store experience, regardless of which computer the user is logged into. | |
| ▲ | IndySun 13 hours ago | parent | prev | next [-] | | Who decides the 'nature' of the content? Who decides what constitutes age appropriate? These questions of liberty are as old as the hills. And the keepers of the internet and virtually every single government past and present have repeatedly and endlessly shown themselves to be lying, conniving, self interested parties. When will 'we' ever learn? *who decides who 'we' are. | |
| ▲ | avhception 11 hours ago | parent | prev | next [-] | | I haven't even thought of this, I'm kinda surprised! This should be how it's done! | |
| ▲ | gzread 12 hours ago | parent | prev | next [-] | | It's necessary if the page contains mixed content. Under your proposal, Google Search would need a separate search page that shows adult content, and that would be even worse for privacy - logs would show whether you accessed the adult search page - and adult sites (not only porn) would try quite hard to not be relegated to that second, less discoverable, search page. | | |
| ▲ | _heimdall 12 hours ago | parent [-] | | What you're describing with Google Search already exists, search engines already offer their own search settings including "safe search" or whatever they call it which filters out adult images. Services can absolutely decide to provide their own content settings. It doesn't require a universal setting or OS requirements, and it doesn't require providing PII to every website or telling a central authority every site you visit. |
| |
| ▲ | panzi 14 hours ago | parent | prev [-] | | Exactly. Except this way you can't build a complete biometric database of all citizens! Since it's so obvious how to do it correctly without creating such a database, one could make the assumption that the creation of such a database is the actual goal. | |
| |
| ▲ | heavyset_go 17 hours ago | parent | prev | next [-] | | > if we don't do something, the trajectory is that ~every website and app is going to either voluntarily or compulsorily do face scans, AI behavior analysis, and ID checks for their users You're going to get that, anyway. Platforms want to sell their userbases as real monetizable humans. Governments want to know who says and reads what online. AI companies want your face to train the systems they sell to the government, and they want to be the gatekeepers that rank internet content for age appropriateness and use that content as free training material. Age verification across platforms is already implemented as AI face and ID scans. This is where we're already at. | |
| ▲ | idle_zealot 16 hours ago | parent | next [-] | | I am well aware of the alignment of interests and the dismal state of things. I'm of the opinion that the only way to divert is radical legal action that shatters the defense industry and social media titans, and it sure as hell won't be Gavin Newsom who delivers it. | |
| ▲ | gzread 12 hours ago | parent | prev [-] | | And laws like this California one actually make it more illegal. |
| |
| ▲ | tsukikage 14 hours ago | parent | prev | next [-] | | My objection to all this stuff is the requirement to share government ID / biometrics / credit card info etc with arbitrary third party sites, their 228 partners who value my privacy and need all my data for legitimate interest, and whatever criminals any of those leak everything to, and also give the government an easily searchable history of what I read when those sites propagate the info back. Any scheme that doesn’t require this won’t get pushback from me. As an alternative: I already have government-issued ID and that branch of government already has my private info; have it give me a cryptographic token I can use to prove my age bracket to the root of trust module in my computer; then allow the OS to state my age to third parties when it needs to with a protocol that proves it has seen the appropriate government token but reveals nothing else about my identity. Other alternatives are possible. | | |
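A toy model of that flow, just to show what each party ends up holding. HMAC with one demo key stands in for the real cryptography purely to keep the sketch runnable; an actual scheme would use public-key signatures or zero-knowledge proofs so that verifying sites hold no secret:

```python
import hashlib
import hmac

GOV_KEY = b"demo-only-secret"  # held by the issuing authority

def issue_token(citizen_record: dict) -> dict:
    """The authority checks the real date of birth, then signs only the
    age bracket. The token carries no name, DOB, or ID number, and
    every citizen in the same bracket gets an identical token."""
    bracket = "18+" if citizen_record["age"] >= 18 else "under18"
    sig = hmac.new(GOV_KEY, bracket.encode(), hashlib.sha256).hexdigest()
    return {"bracket": bracket, "sig": sig}

def site_verifies(token: dict) -> bool:
    """A site checks the signature over the bracket claim, nothing else."""
    expected = hmac.new(GOV_KEY, token["bracket"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_token({"name": "Alice", "age": 34})
print(token["bracket"], site_verifies(token))  # 18+ True
```

Because identical tokens are issued per bracket, a presented token reveals nothing about who holds it; the HMAC shortcut is only a stand-in, since a site holding the key could mint tokens itself, which asymmetric crypto avoids.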
| ▲ | biofox 13 hours ago | parent [-] | | That would require technical know-how. It's much easier for clueless lawmakers to write "the computer check the age", and make it everyone else's problem. |
| |
| ▲ | jprjr_ 9 hours ago | parent | prev | next [-] | | I think a better approach would be incentives versus punishments. Like - you don't make it illegal to not do age attestations, but you provide a mechanism to encourage it. You get a certification you can slap on your website and devices stating you meet the requirements of a California Family-Friendly Operating System or whatever. Maybe that comes with some kind of tax break, maybe it provides absolution in the case of some law being broken while using your OS, maybe it just means you get listed on a website of state-recommended operating systems. That certification wouldn't necessarily have to deal with age attestation at all. It could just mean the device/OS has features for parents - built-in website filtering, whatever restrictions they need. Parents could see the label and go "great, this label tells me I can set this up in a kid-safe way." Hell, maybe it is all about age certification/attestation. Part of that certification could be when setting it up, you do have to tell it a birthdate and the OS auto-applies some restrictions. Tells app stores your age, whatever. The point is, if an OS doesn't want to participate, it doesn't have to. Linux distros etc would just not be California Family-Friendly Certified™. I wouldn't have to really care if California Family-Friendly Certified™ operating systems are scanning faces, IDs, birth certificates, collecting DNA, whatever. I'd have the choice to use a different operating system that suits my needs. |
| ▲ | ApolloFortyNine 7 hours ago | parent | prev | next [-] | | >I ask because I feel like if we don't do something, the trajectory is that ~every website and app is going to either voluntarily or compulsorily do face scans, AI behavior analysis, and ID checks for their users, and I really don't want to live in that world. The only reason they'd _have_ to do that is government laws making them do so. When the law is vague about what age verification is, if one company decides to do ID verification, any site that doesn't might not be doing 'enough' in the eyes of the law (it'd come down to a court case if not specifically defined). Though it may seem more convenient to just do it at the OS level (though really the browser level would make more sense, with a required header/cookie, no?), I'd be shocked if you don't see it expanded in the future to be more than a checkbox. | |
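The browser-level variant mentioned above could be as small as this sketch. The `Sec-Age-Bracket` header name and the OS lookup are made up for illustration and are not part of any real standard:

```python
def os_reported_bracket() -> str:
    # Stand-in for a real OS/parental-controls API; assume the user
    # (or their parent) picked a bracket once at device setup.
    return "18+"

def build_request_headers(bracket: str) -> dict:
    """Headers a browser might attach to every request under this scheme."""
    return {
        "User-Agent": "ExampleBrowser/1.0",
        "Sec-Age-Bracket": bracket,  # hypothetical header, not standardized
    }

print(build_request_headers(os_reported_bracket()))
```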
| ▲ | pc86 8 hours ago | parent | prev | next [-] | | > I also agree with S76 that some laws regarding how an operating system intended for wide use should function are acceptable. The only laws the government should pass regulating software running on someones computer are laws protecting those consumers from the companies writing that software. For example, anti-malware/anti-spyware. The government has no business telling a random company that their software needs to report my age, whether it's unverified and self-reported or not. | | |
| ▲ | idle_zealot 5 hours ago | parent [-] | | > protecting those consumers from the companies writing that software Of course, but that's exactly the framing of the verification laws. Protecting underage computer users from products/services unsuitable for them. If you want protection to be effective then it needs to be on by default, but also needs to ultimately be controlled by the user, and it's that second part that ID checks and the like fail. |
| |
| ▲ | Xelbair 13 hours ago | parent | prev | next [-] | | Exactly the same way as I do now for such laws. It's pointless, does not increase security, does increase the complexity of every interaction, and introduces a lot of weird edge cases. What I want is full anonymity enshrined in law, while at the same time giving parents, not governments, but parents, options to limit what their children can do on the internet. | |
| ▲ | sophrosyne42 16 hours ago | parent | prev | next [-] | | The push to do biometric data collection is entirely the result of entrepreneurs trying to get ahead before laws are passed. Their behavior is the result of the push to restrict the open internet. If we don't do anything, they will stop. You don't always have to do "something". Sometimes the harm comes from trying to do something. | |
| ▲ | edflsafoiewq 17 hours ago | parent | prev | next [-] | | What makes you think this is going to stave off that world? More likely you'll get both, since I doubt this API is going to satisfy other states' age verification requirements. | | |
| ▲ | idle_zealot 16 hours ago | parent [-] | | Sometimes a token effort or theater is sufficient to quell public sentiment. Like the oft-ignored and ineffective speed limits on roads, or the security theater at airports. That only handles the sentiment angle though. You still have to do something about would-be autocrats who want censorship and surveillance tools, and the oligarchs who want tracking and targeting data. |
| |
| ▲ | Hizonner 7 hours ago | parent | prev | next [-] | | > You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous? Yes, because (a) it wouldn't do anything, and (b) it would take about 5 seconds for the morons who push this stuff to start whining about that fact, and using the fact that "Society(TM) has mandated this and people are avoiding it" to demand effective verification, which would be a huge disaster. They won't be placated by anything short of total victory, and if you give them anything, you're just encouraging them. | |
| ▲ | latentsea 15 hours ago | parent | prev | next [-] | | > I agree. I also agree with S76 that some laws regarding how an operating system intended for wide use should function are acceptable. How would you react to this law if the requirement was only that the operating system had to ask the user what age bracket it should report to sites? You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous? What's the point in doing any of this if it doesn't result in materially better outcomes? | | |
| ▲ | idle_zealot 15 hours ago | parent [-] | | The point is that I think it's one of a few things that if done together could result in better outcomes. First, it standardizes parental controls, which ought to be so easy to use that failure to do so is nearly always a proactive decision on the part of the guardian. It doesn't need to be perfect, just reduce friction for parents and increase friction for kids accessing the adult internet. Second, it would signal to worried parents and busybodies that something has been done to deal with the danger that unmediated internet access might pose to minors. I don't think that it's a big issue, but a lot of energy has gone into convincing a lot of people that it is. The other part of achieving a good outcome would be to disempower those in the political and private sphere who benefit from a paranoid and censorious public and have worked to foment this panic. That's the much harder part, but it's not really the one being discussed here. I'm pitching the low-intrusiveness version to gauge sentiment here for that easier part of the path. | | |
| ▲ | latentsea 14 hours ago | parent | next [-] | | Your last point is the only one I partially agree with. The rest... will make no practical difference to what is going on in the world today. I genuinely think the only two solutions to this problem that are workable are "zero privacy, zero freedom" or "fuck the children, we don't care". Now, to be fair... there is a middle ground that is neither of those options that I believe would be much more effective and allow us to retain our freedom and privacy and keep kids a lot safer. It's called education. But... no one will go for it, because I think for it to truly be effective you'd have to go as far as showing very young kids all the darkness that's out there, laying out in painstaking detail exactly how it works, and deeply drilling it into them. There ain't a snowball's chance in hell anyone would go for that, BUT... would it work? I'd bet your bottom dollar it would. The current extent of this education in public schools is a half-hour visit from a police officer to the classroom, handing out a sheet to the kids, and giving a 'good touch' / 'bad touch' talk. What's needed is a full-length university-level course on the whole topic from end to end. If you're in an adversarial relationship and need to defend yourself, the best thing you can do is "know your enemy". But no... "they're too young to learn about that stuff, we need to shield them from it - think of the children!" is the reasoning people throw back at you when you suggest it. It hands down has to be the number one thing that could actually move the dial significantly, and it's just completely unpalatable to the majority of the populace. | |
| ▲ | hellojesus 9 hours ago | parent | prev [-] | | > First, it standardizes parental controls, which ought to be so easy to use that failure to do so is nearly always a proactive decision on the part of the guardian. If this mattered to the market, don't you think a company would have implemented it or would have been built to fill the need? | | |
| ▲ | idle_zealot 2 hours ago | parent [-] | | 1. No, I don't think that the market does what people want. That's not the primary reward signal. 2. I'm making an ought statement of values, like "we ought not pollute rivers." I don't really care what any system of resource allocation has to say about that. |
|
|
| |
| ▲ | thayne 17 hours ago | parent | prev | next [-] | | > How would you react to this law if the requirement was only that the operating system had to ask the user what age bracket it should report to sites? You get to pick it, it isn't mandatory that it be checked, and it doesn't need to be a date, just the bucket. Is that still too onerous? Isn't that what the CA law is? | | |
| ▲ | db48x 16 hours ago | parent | next [-] | | Almost. Technically an adult must create an account for any non-adult who wants to use the computer, and configure it with the appropriate age category. Honestly it’s the dumbest thing ever. Best just not to play that game. | |
| ▲ | ohhnoodont 12 hours ago | parent [-] | | How is that dumb? It seems reasonable and pragmatic. If the current status quo is ID uploads and face scans, this seems like the better approach. It shifts the responsibility back to parents. All adult service operators have to do is filter requests with the underage HTTP header set. | | |
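Server-side, "filter requests with the underage HTTP header set" really is about this much code. The header name and bracket values below are assumptions for illustration; note the default-deny stance when the header is missing or unrecognized:

```python
AGE_HEADER = "Sec-Age-Bracket"   # hypothetical header name
ADULT_BRACKETS = {"18+"}         # assumed bracket vocabulary

def allow_adult_content(headers: dict) -> bool:
    """Default-deny: a missing or unrecognized bracket is treated as a minor."""
    return headers.get(AGE_HEADER) in ADULT_BRACKETS

print(allow_adult_content({"Sec-Age-Bracket": "18+"}))  # True
print(allow_adult_content({}))                          # False
```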
| ▲ | db48x 9 hours ago | parent [-] | | How about the part where children cannot legally create accounts of their own, on computers that they own? I did that by the time I was 10. > It shifts the responsibility back to parents. Without these stupid laws parents already _have_ that responsibility. | | |
| ▲ | ohhnoodont 7 hours ago | parent [-] | | > How about the part where children cannot legally create accounts of their own, on computers that they own? Where is that actually stated in any law being discussed? If a parent gives a child a device with admin access, that’s their choice. But it also makes sense that we, the minds behind all of this technology, provide parents with the most basic of tools to restrict a child’s access online and hold accountable companies that knowingly serve adult content to children. That’s all the CA law does AFAIK. Sure, my generation was raised on 4chan. But I can understand why parents today may want the tools to limit that. |
|
|
| |
| ▲ | idle_zealot 16 hours ago | parent | prev [-] | | Unfortunately no. There's a requirement that the OS disregard the user-indicated age if it has reason to think they're lying. Presumably this creates the obligation to monitor the user for such indicators. | | |
| ▲ | vineyardmike 16 hours ago | parent [-] | | I assume this is less "if they're lying" and more "if you've independently collected this data". It doesn't require you to challenge the user-indicated age, it requires you to use your own signal before that of the OS. As a silly example, tax software probably has your full birthday, including year, which is more precise. Many social networks collected this data, as did a lot of major tech companies that implemented parental controls already. |
|
| |
| ▲ | flir 15 hours ago | parent | prev | next [-] | | > Is that still too onerous? Isn't it just pointless? I'm getting upset by face scan creep too. I do not like it. No sir. But mandating a self-reporting mechanism feels about as useful as DNT cookies, or those "are you 18? yes/no" gates on beer sites. | | |
| ▲ | idle_zealot 14 hours ago | parent [-] | | It'd be more useful than DNT because there would be legal teeth on the side of the sender and receiver of the signal. It'd be more useful than the yes/no gates because an operating system could choose to allow the creation of child accounts. I.e. it would be a standardization of parental controls with added responsibility on sites/apps to self-determine if they should be blocked or limit functionality, rather than relying on big white/blacklists. Basically an infrastructure upgrade, rather than relying on a patchwork of competing private solutions to parental controls and age checks. The hope is also that a system like this would remove concerned parents from the list of supporters for pervasive mass surveillance and age scans. If they feel like you'd need to be a moron to miss the "This is a child device" button while setting up their kid's phone and laptop, and it's broadly understood that just pressing that button locks down what the device can access pretty effectively, that puts a damper on the FUD surrounding their child's internet usage. |
| |
| ▲ | iugtmkbdfil834 15 hours ago | parent | prev | next [-] | | Sadly, the only real response here is non-compliance. Recently, a credit card company wanted me to provide ID upon login ( I was amused -- while my setup may not be common, it has not changed for years now ). So I didn't and just ignored it. I checked on it this month and suddenly it was fine. But then... one has to be willing to take a hit to their credit and whatnot. The point remains though. They have zero way to enforce it if we choose not to comply. Just saying. | |
| ▲ | idle_zealot 14 hours ago | parent [-] | | They have plenty of ways to enforce it. It's a law, they can take you to court. I guess it's easy to forget these days but laws do still apply to some people. If you're going to host a service, I guess consider using Tor or something. | | |
| ▲ | iugtmkbdfil834 14 hours ago | parent [-] | | Friend. On this very forum, you will normally see me argue that further deterioration of civil society is bad and we should be doing everything to maintain society as is. However, as with most things, there is a limit. That limit varies from person to person, but it is getting harder and harder to argue that laws apply ( especially once you recognize they don't quite apply across the board ). << If you're going to host a service, I guess consider using Tor or something. That one confused me. What do you mean? | | |
| ▲ | hellojesus 9 hours ago | parent [-] | | I think the person meant that if you don't comply there may be civil or criminal consequences, so if you want to knowingly run a non-compliant website or app, you should host it on Tor to keep the state from reaching you personally. I know the CA law is civil only, so I don't think there is much CA can do if you publish an OS and don't make money from CA folks, but other implementations may decide to impose criminal penalties. | |
|
|
| |
| ▲ | Alan_Writer 15 hours ago | parent | prev | next [-] | | Totally agree, but I think we are heading toward a fully intrusive system in every aspect of life. And this is just the beginning.
Even decentralized identity systems are not that decentralized, of course. | |
| ▲ | soulofmischief 10 hours ago | parent | prev [-] | | A cornerstone philosophy behind the American legal system is that we must view every single increase in State power as a potential slippery slope, and must prove that it isn't. In this case, it's a slippery slope; if we're normalized to this, what other incursions into our 1A rights to free speech, religious freedom and public gathering will we allow? And I say religious freedom, because these kinds of laws are largely peddled by religious folk or people who otherwise have been deeply influenced by early American Puritan religious culture. Neither I nor my children should be forced to submit to such religiously-motivated laws. I can decide for myself and for my child what is appropriate. Neither I nor my children can be compelled to enter personal information into a machine created by someone who is also illegally compelled to require it. Neither I nor my children can be compelled to avoid publicly gathering on the internet just because we don't want to show identification and normalize chilling surveillance capitalism. I thought this was fucking America. |
|