yalogin 14 hours ago

The big issue isn’t even age verification. The end goal is verified user identification. They want every transaction on the internet to be associated with the exact identity of the user. No more anonymity.

In the short term the way it will be implemented is this — age verification will not be a binary, it will also want to push your DoB, name, location etc and they say “the choice is with the user” but the default will be to send everything. Very soon there will be services that require DoB or name or something else to gate new or existing functionality. That is the slippery slope it will be built as and that is how they win the game

totetsu 11 hours ago | parent | next [-]

It’s not very soon; it’s already the case that if one wants to enable the latest models in the OpenAI API you have to submit your details to their “identity provider”.

abracadaniel 11 hours ago | parent | next [-]

Which is why it’s important to be able to run models locally. Which also might explain the strategy behind buying all of the memory that is or will exist for at least a year out. Maybe we’ll eventually see AI safety be used to prevent people from running local models.

PeterStuer 5 hours ago | parent | next [-]

You mean having to sign into your Microsoft account to get your bootloader co-signed before your legally mandated TPU 3.1 allows you to install a government-blessed and sufficiently telemetrized signed OS on "your" computer, if you are on the whitelist of not-yet-misinformation-spreaders?

fennecbutt 37 minutes ago | parent [-]

Well I suppose in that case it depends on how freedom loving TSMC (Taiwan) or ASML (Netherlands) want to be.

No chips for you, random government. No chips for you either, or you. And you.

sandworm101 8 hours ago | parent | prev [-]

+1 for local models. It also teaches users about how much energy they are using. One's perspective on 24/7 chatbots and agentic operating systems changes when you feel the heat coming from a rack of gpus.

(Spring is nearly here and my excuse about my rig also heating my house is about to end. Soon I will be paying extra to run my a/c as my rig pumps out a steady 1000w under load.)
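For scale, the arithmetic behind that figure, with a placeholder electricity price (actual rates vary widely by region):

```python
# Rough running cost of a 1000 W rig under continuous load, at a
# hypothetical electricity price of $0.15/kWh (yours will differ).
watts = 1000
price_per_kwh = 0.15

kwh_per_day = watts / 1000 * 24           # 24.0 kWh per day
cost_per_day = kwh_per_day * price_per_kwh

print(f"{kwh_per_day} kWh/day, ${cost_per_day:.2f}/day, "
      f"${cost_per_day * 30:.2f}/30 days")
# 24.0 kWh/day, $3.60/day, $108.00/30 days
```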

1e1a 6 hours ago | parent [-]

You could use it to heat a tropical greenhouse.

gruez 11 hours ago | parent | prev [-]

Given that the recent Mexican telecom hacks were allegedly done with significant help from OpenAI/Anthropic's chatbots, it seems at least somewhat prudent to require some sort of identity verification for API access? I'm struggling to see how this isn't the tech community's version of "no background checks for gun purchases" or "no KYC for bank accounts".

totetsu 6 hours ago | parent | next [-]

Is API access really so extreme that it's italics-worthy? Technology should be available to us in roles other than passive consumer, using front ends that might not suit what we need, or that work against us in some way. I'm already giving a credit card to OpenAI to use the service, but now I also have to hand my government ID over to withpersona.com. Who are they? Who are their investors? Will they leak my information accidentally/accidentally-on-purpose/on-purpose? Okay, maybe Rick Song and Persona Identities are genuinely trustworthy, but what happens when someone wants an exit in the future and they merge with Palantir, and now when I generate a picture I have to worry about being added to a target list for some automated kamikaze drone kill chain a la Black Mirror? Or if this becomes standard practice, maybe it's not Persona Inc. but dozens of these companies I have to vet, and it becomes too hard. Rather than guns, this is more like identity verification for pipe purchases from the hardware store because one could use the pipe to build a rocket.

paradox460 10 hours ago | parent | prev [-]

They were also likely done with keyboards and mice. Should we require id at point of purchase for those?

gruez 10 hours ago | parent [-]

Alright, so does that mean we don't need KYC for gun purchases or bank accounts either?

Of course you're probably going to say something about how guns and bank accounts are crucial components of crime, in which case the same holds for AI in the Mexican telecoms hack.

roenxi 8 hours ago | parent | next [-]

> Alright, so does that mean we don't need KYC for ... bank accounts either?

That sounds reasonable. A bank can just be an institution that holds money for people; they don't need to be all over their customers' business. It is like a telecom not being responsible for what their customers say. In a simple sense banks don't need KYC.

sandworm101 3 hours ago | parent [-]

>> A bank can just be an institution that holds money for people

Nope. That is a storage locker. A bank uses the money it holds for other purposes such as loans or its own investments, possibly returning interest to the depositor. But, most importantly, a bank disburses money. It therefore needs to know who deposits what so that it doesn't eventually release funds to the wrong person. And then there are the lengthy procedures for handing out money without customer permission. People die. Governments garnish wages. Courts order payments for child support. If you hold money you have to be prepared for this stuff. So you need to be absolutely confident in the identity of everyone you deal with.

Want a simple bank? A bank that doesn't ask for ID? Keep your cash under your mattress. Or put it all in a crypto wallet.

roenxi an hour ago | parent [-]

I don't think this makes sense. You seem to be saying that a bank has to do all these things to control criminals while simultaneously arguing that there are simple methods criminals could use to bypass the banks (ie, deal in cash and keep it under the mattress or use crypto).

Given that the criminals aren't going to be using the banks it would make sense for the banks to not have mandatory administrative overhead that is easy to avoid.

> Nope. That is a storage locker.

Again, sounds good to me. Let people have a storage locker with a plastic debit card attached. If people had the option of a bank that was a little bit more responsible and didn't roll the dice of total collapse every financial crisis there'd be many that would go for that. Prepper types for example. The discourse glosses over how crazy it is that full-reserve or near-full-reserve banks are soft-banned.

BobbyJo 10 hours ago | parent | prev | next [-]

What happens when everyone needs to use AI for their job? Genuine question that I think gets at the heart of the debate.

Once a common technology that everyone has access to becomes powerful enough to alter the lives of others on command, do we as a society just need to do away with the concept of anonymity? We are all just too powerful in isolation, and too much of a threat to the collective, that we cannot reasonably expect not to have some governing body watching at all times?

Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?

gruez 10 hours ago | parent [-]

>What happens when everyone needs to use AI for their job? Genuine question that I think gets at the heart of the debate.

Considering most places do direct deposit, and that requires a bank account (so KYC), I don't see what's particularly new here. Many places also do background and/or work-eligibility checks, which again is a form of KYC.

>Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?

Fortunately 3D-printed guns are bad enough that it's not really an issue, although the bigger threat is probably CNC machines. However, those will probably get a pass, because a CNC-milled gun is eye-wateringly expensive compared to a black-market gun, so nobody would bother.

AnthonyMouse 7 hours ago | parent [-]

> Considering most places do direct deposit, and that requires a bank account (so KYC), I don't see what's particularly new here.

Slippery slope is a fallacy, they said.

> Many places also do background and/or work eligibility checks, which again is a form of KYC.

Except that it isn't KYC at all, both because employees aren't customers (most people are the employees of one company but the customers of hundreds or more), and because the majority of people don't have that requirement imposed on them by the government. There are many jobs you can get without a background check.

martin-t 9 hours ago | parent | prev [-]

Just yesterday I thought about the right middle ground for KYC when buying guns.

The issue with centrally registering guns is that when your country is taken over by hostile forces (whether an invading army or a democratically elected abuser who turns it into a dictatorship), they know who has the guns and can force those people to surrender them (politely at first; authoritarians always use a salami-slicing technique).

The issue with no controls is that even anti-social and mentally ill people can get them.

I wonder if the right middle ground could be:

- Sellers have to do their due diligence - require ID, proof of psychological examination, whatever else is deemed the right balance.

- Not doing due diligence means they get punishment equal to that for any offense committed with that gun.

- They might be required to mark/stamp the gun so that it can be traced back to them or have witnesses for the transfer.

AnthonyMouse 6 hours ago | parent | next [-]

The arguments for background checks generally have to be split into two separate classes of people.

The first is the mentally ill. Intuitively it seems desirable to say that someone undergoing treatment for e.g. depression shouldn't buy a gun. The problem here is the massive perverse incentive. If you're pretty depressed but you're not inclined to forfeit your ability to buy firearms, you now have a significant incentive to avoid seeking treatment. At which point you can still buy a gun but now your mental illness is going untreated, which is far worse than where we started.

The second is career criminals, i.e. people who have already been convicted of a crime and want to commit another one. The problem here is that career criminals... don't follow laws. If they want a gun they steal one or recruit someone without a criminal record into their gang etc., both of which are actually worse than just letting them buy one.

On top of that, when people get caught, prosecutors generally try to get them to testify against other criminals in exchange for a deal, who are then going to be pretty mad at them. Which gives them a much higher than average legitimate need to exercise their right to self-defense once they get back out. And then you get three independent bad outcomes: If they can't defend themselves they get killed for snitching, if they acquire a gun anyway so they don't then they could go back to prison even if they were otherwise trying to reform themselves, and if they think about this ahead of time or are advised of it by their lawyers then they'll be less likely to cooperate with prosecutors because the other two scenarios that are both bad for them only happen if they snitch.

Meanwhile the proposal was only ever expected to address a minority of the problem to begin with because plenty of the people who do bad things can pass the background check. And if you have a policy that doesn't even solve most of the original problem while creating several new ones, maybe it's just a bad idea?

watwut 6 hours ago | parent [-]

Third: non-career violent people. Domestic violence or other interpersonal violence should prevent you from having a gun, regardless of whether you are a career criminal.

watwut 6 hours ago | parent | prev [-]

Personal guns have absolutely nothing to do with defense against "hostile forces". That is pure fantasy.

Occasionally, gun owners are THE hostile force, buying guns explicitly to bully and threaten. But that is about it, really.

rdevilla 8 hours ago | parent | prev | next [-]

I hope someone takes those Meta glasses or an Oculus or Apple Vision or something and hooks it up to clearview or some other facial recognition service and agentically scrapes OSINT sources to doxx people on the street, in real time.

One glance and I have your full name, home address, SSN, all online handles and aliases, employment history, email, and phone number, instantaneously on a HUD. It doesn't even need to be marketed as "doxxing as a service;" it can just be marketed as "professional networking" or "social media." That way people will voluntarily submit their information and all rights over it to the platform.

Until people feel their privacy being viscerally raped on a minute to minute basis nothing will change.

sandworm101 7 hours ago | parent [-]

My black-mirror prediction for how augmented reality and AI will interact, in order of horribleness:

1> Auto-nude. Today we can "nudify" photos and videos. Soon, augmented-reality glasses will be able to nudify everyone in real time. (This is totally possible today.)

2> Auto-translation. Cool. Everyone can talk to everyone, but users will have censorship options. I don't much like hearing Australians, so I will just have the glasses make them all sound like proper Texans. And the sound of people with views alternative to my own is replaced with calming country music.

3> Lie detection. Glasses will look for facial and vocal tics suggestive of deception. Good luck talking your way out of a ticket, or explaining to your boss how you were "sick", when they have a lie detector online 24/7.

4> Censorship of "bad" objects. Signs with ads or news that I do not agree with will be blocked and replaced with more appropriate text. Mosques will appear as churches. Garbage and pollution will become happy birds and clear blue skies. Homeless people will be replaced with attractive young people (see #1 above).

5> Race replacement. I don't like certain races. So my glasses now make everyone Chinese. So long as I don't turn off the glasses, I can live in my custom racist utopia.

foobar10000 an hour ago | parent | next [-]

All are indeed plausible (translation is iffy due to diarization not being all there yet), but why the specific order of horribleness?

Live translation seems either better than auto-nude or worse, but not in the middle of the pack, I'd assume? Am I missing something here?

legacynl 26 minutes ago | parent | prev | next [-]

Lie detection is not going to happen (for a long time). There are no known 'tics' that can reliably detect lies. Even if there were, there is so much variability between individuals that there is basically no way to find a generalized way of telling if somebody is lying.

rdevilla 7 hours ago | parent | prev [-]

This is great. I finally feel for the first time in my life that science has in fact gone too far. At this point living in the so-called "third world" to avoid digital-rape-as-a-service and the ever increasing pace of technology sounds eminently reasonable.

legacynl 24 minutes ago | parent | next [-]

Let's be nice to science here. Machine learning was the science. All this bad shit that has followed is purely the fault of capitalist companies.

sandworm101 7 hours ago | parent | prev [-]

I forgot about lip reading. Lots of possible evils if glasses can read lips.

BlackFly 7 hours ago | parent | prev | next [-]

An account-level flag in a user account on an operating system is the opposite of verified identification. It is self-assertion by the owner of the computer: the parent. If such a control works in the same way as enterprise supervision, the child won't be able to install a VPN or other software to bypass the control.

12 hours ago | parent | prev | next [-]
[deleted]
laughing_man 6 hours ago | parent | prev | next [-]

Yeah, none of this is about children. "Think of the children" is just a means to an end, and most likely what we'll find is even when we lose all pretense of anonymity somehow the kids will figure out a way to get access.

hypeatei 2 hours ago | parent [-]

> somehow the kids will figure out a way to get access.

This is what they want to happen with the initial round of "it's just a DOB field bro" legislation. It'll be completely useless, easy to bypass, and annoying to adults. But, everyone will be warming up to this government mandated prompt in their OS. Perfect, now legislators know they have a foundation to work with to introduce "reasonable" amendments to this prompt that require you to upload ID, for example. Frogs in a pot.

SarahC_ 9 hours ago | parent | prev | next [-]

IMAGINE A WAR.

Now - wouldn't a government LOVE to know who's saying what? Rather than shutting down the entire $$$$$ international corporate internet.

Money concerns as usual.

mondomondo 2 hours ago | parent | prev | next [-]

[dead]

simonask 14 hours ago | parent | prev | next [-]

[flagged]

kdheiwns 11 hours ago | parent | next [-]

I feel like I see these comments basically verbatim and it's freaking me out. The whole "I share your concerns, but hear me out: anonymity is bad." It's basically identical wording every single time.

I think people who say this should back it up by posting their full name, date of birth, SSN or other ID number, and address. A phone number would also be helpful so we can call and verify that they made the post. Otherwise they're not being honest.

try_the_bass 8 hours ago | parent | next [-]

> I think people who say this should back it up by posting their full name, date of birth, SSN or other ID number, and address. A phone number would also be helpful so we can call and verify that they made the post. Otherwise they're not being honest

But this isn't (intellectually) honest, either?

Maybe you can justify asking that they post under their real name, but asking for the kind of information that's required to steal their identity isn't the same as asking them who they are.

simonask 4 hours ago | parent | prev | next [-]

Never once did I say that "anonymity is bad", but people in this thread are piling on as if that's what I said. I said there are drawbacks, and that those drawbacks are real.

8 hours ago | parent | prev [-]
[deleted]
fc417fc802 11 hours ago | parent | prev | next [-]

> Online anonymity has significant, real-world drawbacks.

Do please be specific about those. Provide concrete examples and justify for the class why those involved couldn't have voluntarily done away with anonymity for that particular interaction.

Hypothetically someone can browse a tor site in one tab, post on 4chan in a second one, all while accessing online banking in a third. The bank can use hardware backed 2FA to verify you. Where's the issue here?
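"Hardware-backed 2FA" in practice usually means FIDO2 challenge-response, but the simpler one-time-code family banks also deploy (authenticator apps, code-generating fobs) is standardized as HOTP/TOTP. A minimal sketch of RFC 4226 HOTP, which illustrates the point: verifying a code proves possession of a shared secret, not the holder's identity.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
```

TOTP is the same construction with `counter = unix_time // 30`.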

simonask 4 hours ago | parent [-]

> Do please be specific about those.

Here is one example: It's likely that we will never know who was behind the attempted backdoor in the xz library, which was almost successful in making a huge number of Linux installations worldwide vulnerable to remote exploitation. [1]

That malicious contributor is protected by online anonymity. Now, I know that it's probable that a state actor was behind "Jia Tan", meaning they could have been supplied with a fake ID as well, but that's still a higher barrier.

I don't think (and have not stated) that anonymity is worthless. It definitely has value, especially if you're a persecuted minority or under other kinds of threat. I just don't think it's helpful to pretend that it is completely unproblematic.

[1]: https://tukaani.org/xz-backdoor/

fc417fc802 13 minutes ago | parent [-]

> > and justify for the class why those involved couldn't have voluntarily done away with anonymity for that particular interaction.

The project in question could have chosen to verify identities if they deemed it worthwhile to do so.

novok 13 hours ago | parent | prev | next [-]

When financial institutions in the USA aren't even adding basic things like approving transactions on your phone, and keep most things pull-based (anyone who knows a few magic numbers can pull funds) rather than push-based, this really doesn't hold water. Anonymity doesn't even register in the day-to-day of what is bad with the internet; the vast majority of it comes from very non-anonymous sources: influencers, apps, or institutions.

j16sdiz 11 hours ago | parent [-]

In many other countries, these are enforced by the central bank, a banking association, or legislation.

In the USA, small businesses, small banks, and credit unions are often used as an excuse to push back against these kinds of rules.

Aurornis 13 hours ago | parent | prev | next [-]

> An information leak 30 years ago was bad, but it had a fairly limited impact radius. Today it can lose you your house, your savings, your relationships, and even your life ("swatting" comes to mind).

So you are afraid of minor information leaks getting you killed, but you’re also trying to tell us that online anonymity is a bad thing?

Come on. This argument isn’t even coherent from paragraph to paragraph.

> I don't think it's reasonable to keep dreaming of the 90s or 00s when the internet was a comparatively innocent place

This is such a strange argument as the internet was most definitely NOT an innocent place, even relatively speaking, in that period.

I think there is a lot of nostalgic history rewriting in these claims. Much like political movements that claim that the past was a better time, it’s easy to only remember the good parts of how things were in the past.

simonask 12 hours ago | parent [-]

[flagged]

Aurornis 12 hours ago | parent [-]

> I neither believe nor did express any of the opinions you accuse me of.

I directly quoted your beliefs that minor information leaks on the internet can lose you your house and get you killed, as well as your claim that the internet was significantly more innocent in the past.

These were the points you were putting forward along with your insistence that we have to “be real” about the problems of anonymity on the internet.

It's hard for me to believe that you don't recognize the dissonance between the two points you were putting forward.

Your silly “Are you an American” attempt at an insult or rebuttal reveals the level of conversation you’re having, though.

simonask 4 hours ago | parent | next [-]

You said:

> So you are afraid of minor information leaks getting you killed, but you’re also trying to tell us that online anonymity is a bad thing?

Which is a really severe misrepresentation of my argument.

My argument is that anonymity has drawbacks, and that it's bad to just ignore those drawbacks.

> It's hard for me to believe that you don't recognize the dissonance between the two points you were putting forward.

But there absolutely is a dissonance? This is what's called a dilemma: Online anonymity protects some people, and puts other people at risk. If competent people ignore the latter, incompetent people will be trying to solve it instead, so we get these laws.

> Your silly “Are you an American” attempt at an insult or rebuttal reveals the level of conversation you’re having, though.

Sorry about the accusation, it was somewhat flippant. It just seems you and others read an opinion that goes slightly against your own, and immediately you assume that I actually hold the polar opposite opinion, which I don't.

RajT88 11 hours ago | parent | prev [-]

He was definitely trying to make a point, and then immediately undercut it. It is not just you.

ux266478 12 hours ago | parent | prev | next [-]

> As society is more and more digitized

How about this is actually the real problem? Online banking is not worth an omniscient global surveillance state, let alone the immense amount of leverage gained by this digitization.

voidfunc 12 hours ago | parent [-]

There's no putting that genie back, and most people wouldn't want to.

scotty79 13 hours ago | parent | prev | next [-]

> Online anonymity has significant, real-world drawbacks.

Online anonymity has significant, real-world benefits which every doxxed person ever will list for you.

gzread 12 hours ago | parent [-]

And drawbacks, too. Imagine if you could only dox someone else by doxxing yourself at the same time.

drdeca 11 hours ago | parent [-]

I don’t think that is really a sufficient defense? The amount of focus pointed at the person matters for this.

mindslight 13 hours ago | parent | prev [-]

> Sticking our head in the sand crying "git gud" while millions get scammed out of their life savings...

The solution is called a durable power of attorney and then moving significant assets to different financial institutions with e-statements. Or the heavyweight option is a living trust.

Mandatory identity verification or locking down software really have no bearing on this problem. Scammers leverage generic apps in the app stores just fine.

This problem most certainly is a part of the global turn towards fascism, which is ultimately based on frustrated people demanding easy answers and then empowering those who are able to give them easy answers by lying to them.

simonask 12 hours ago | parent [-]

Perhaps the first step is to actually listen to the frustrated people. Maybe at least some of their problems are real.

mindslight 12 hours ago | parent [-]

I've definitely listened to the frustrated people, as well as even sharing many of their frustrations. And their (our) problems are definitely real. I still stand by what I said.

To show you that I'm maybe not just blowing smoke out of my ass on this topic, here is me personally dealing with a scammer-adjacent problem: https://news.ycombinator.com/item?id=47125550

Buttons840 11 hours ago | parent | prev | next [-]

Somehow they will eliminate anonymity for real people, but bots will still be pushing Russian or... some other country's interests with massive bot farms.

owisd 14 hours ago | parent | prev [-]

If the end goal was user identification then the digital ID + zero knowledge proof age verification methods would be disallowed, which they aren't. https://blog.google/products-and-platforms/platforms/google-...

mindslight 13 hours ago | parent [-]

You got suckered by the marketing. Google's "zero knowledge" approach requires devices locked down with remote attestation, which prohibits end users from running their own code (when interacting with websites that prevent it, which as time goes on under this plan will be everywhere). The only actual difference here is that this is Google's desired approach to destroying anonymity and personal computing.

remcob 13 hours ago | parent [-]

Why is that required? The whole point of zero knowledge proofs is that it can run on untrusted devices.

Aurornis 12 hours ago | parent | next [-]

Because true “zero knowledge” proofs are actually useless for age gating purposes.

Conceptually, if a proof was truly zero knowledge and there were no restrictions on generating it, there would also be nothing stopping someone from launching a website where you clicked a button and were given a free token generated from their ID. If it was truly a zero knowledge proof it would be impossible to revoke the ID that generated it, so there is no disincentive to freely share IDs.

So every real-world "zero knowledge" proof eventually restricts something. Some require you to request your tokens from a government entity. Others try to build hardware attestation chains so that, theoretically, you can't generate them outside of the approved means.

But the hacker fantasy of truly zero knowledge proofs is impossible because 1 hour after launch there would be a dozen “Show HN” posts with vibe coded websites that dispense zero knowledge tokens.
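To make the sharing problem concrete, here's a toy model (not any real scheme; the key and message strings are invented for illustration) of a maximally "zero knowledge" age token. Because the token binds to no user identity, a copied token is byte-identical to a freshly issued one, so sharing is undetectable and there is nothing to revoke.

```python
import hashlib
import hmac

# Hypothetical issuer signing key; a real issuer would use an
# asymmetric or blind-signature scheme, but the linkability problem
# is the same.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token() -> bytes:
    # The token attests only "holder is over 18". By design it carries
    # no user identifier, so the issuer cannot link it to anyone.
    return hmac.new(ISSUER_KEY, b"over18", hashlib.sha256).digest()

def verify_age_token(token: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, b"over18", hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

alice_token = issue_age_token()  # issued to a verified adult
bob_token = alice_token          # ...and freely copied to anyone
print(verify_age_token(bob_token))  # True: the verifier can't tell
```

Real deployments avoid this only by reintroducing some linkage: per-token issuance records, rate limits at the issuer, or device attestation, which is exactly the restriction the comment describes.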

AnthonyMouse 7 hours ago | parent [-]

It's also unclear what they'd even be useful for to begin with.

You need some kind of proof system if you need a central authority to certify something, but why is that required? The parents know the age of their kids. They don't need the government to certify that to them. And then the parents can get the kids a device that allows them to set age restrictions.

Whether those restrictions are imposed by the device on content it displays (which is the correct way to do it) or by the device telling the service the approximate age of the user (which needlessly leaks information), you don't actually need a central authority to certify anything to begin with because either way it's just a configuration setting in the child's device.
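The parent-configures-the-device approach can be sketched as a plain local setting; the profile class and rating names here are hypothetical, not any real OS API:

```python
from dataclasses import dataclass

# Hypothetical content ratings mapped to minimum ages.
RATINGS = {"all": 0, "teen": 13, "mature": 18}

@dataclass
class DeviceProfile:
    """Device-local parental-control profile.

    The age is asserted by the parent (the device owner) in settings;
    no government ID or central authority is involved at any point.
    """
    user_age: int

    def may_display(self, content_rating: str) -> bool:
        # Enforcement happens on the device against content metadata,
        # so no age information ever leaves the device.
        return self.user_age >= RATINGS[content_rating]

kid = DeviceProfile(user_age=12)
print(kid.may_display("all"), kid.may_display("mature"))  # True False
```

This is the "restrictions imposed by the device on content it displays" variant; the information-leaking variant would instead send `user_age` (or an age bracket) to the service.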

gbear605 12 hours ago | parent | prev [-]

You’d have to ask Google