reddalo 3 hours ago

And that's also the reason why they introduced "age verification". It's not age verification, they couldn't care less about children.

Age verification is just mass surveillance under a fake name.

ogogmad 3 hours ago | parent | prev [-]

[flagged]

auggierose 3 hours ago | parent | next [-]

Just because you are paranoid doesn't mean they aren't after you.

eipi10_hn 3 hours ago | parent | prev | next [-]

Your comment is psychotic too.

gambiting 3 hours ago | parent | prev [-]

Yeah, except that Ofcom (the UK communications regulator) already said that the main goal of the Online Safety Act isn't protecting children, it's "controlling online discourse". They dropped that pretense literally one day after the act was passed.

>>I am getting very intolerant of these conspiratorial comments

Weird thing to brag about, but sure.

ravenical 3 hours ago | parent | next [-]

Source, please?

JoshTriplett 2 hours ago | parent [-]

https://bsky.app/profile/tupped.bsky.social/post/3lwgcmswmy2...

"officials explained that the regulation in question was 'not primarily aimed at ... the protection of children', but was about regulating 'services that have a significant influence over public discourse'".

delusional 2 hours ago | parent [-]

Isn't this presentation disingenuous? The act is called the "Online Safety Act", and the quote isn't about the "regulation" in its entirety but about what constitutes a "Category 1" service, which an official explainer meant for the public describes as "Large user-to-user services" under the heading "Adults will have more control over the content they see"[1].

It's not clear to me that this is some nefarious underhanded technique. The secretary of state asked why non-porn sites were included in Category 1, and was told that Category 1 wasn't intended to catch porn sites, but is intended to apply to "Large user-to-user services", in line with public communication from the government.

I don't think anybody is under any illusion that "Adults will have more control over the content they see" is intended to protect children.

[1]: https://www.gov.uk/government/publications/online-safety-act...

JoshTriplett an hour ago | parent [-]

This presentation seems entirely reasonable for the purposes of observing the stated goals, which differ from the purported goals. The act is being pitched as a means of "protecting children", which is also the mechanism making it harder for people to argue against it. It is entirely reasonable for people to observe that in practice the government is intending to use it to control online discourse.

Nursie an hour ago | parent | next [-]

The part of the act they are talking about seems to be concerned with content recommendation systems, not proof of age.

The original framing of the quote in that Bluesky thread is highly misleading as a result.

delusional 23 minutes ago | parent | prev [-]

> of observing the stated goals, which differ from the purported goals.

The problem is precisely that it doesn't show that. The Online Safety Act is, in this public explainer, described as legislation that provides protections to multiple groups. Paragraph two says that "the strongest protections" are offered to children, while paragraph three then notes that "The act will also protect adult users".

What is described is a tiered set of protections: at its lowest tier it protects everyone (including adults), with a narrower set of protections extended only to children. It follows quite logically that you only need to know the user's age if you want to show adults content that you are not allowed to show children.

The "categorisation" they are discussing is another axis of tiering. Smaller providers (in Categories 2A and 2B) have a lesser duty of protection imposed on them, according to the explainer, to account for their "size and capacity".

With this context, I think it's quite clear that the comments about the targeting of Category 1 are completely pedestrian. It isn't supposed to apply differently to PornHub and Amazon, because both are large multinationals with enough resources to uphold the duty imposed on them.

For this to reveal anything nefarious about age verification, it would have to be about the designations of "Primary Priority Content" and "Priority Content", which are the types of content you are allowed to show adults but not children.

delusional 3 hours ago | parent | prev [-]

Would you mind linking to where you got that "controlling online discourse" quote? I am not able to find anything like it.

Nursie an hour ago | parent [-]

It comes from an article in The Times - https://archive.ph/2025.08.13-190800/https://www.thetimes.co...

However, the context is highly misleading: in the original, the quote appears to refer to the parts of the act that deal with content recommendation, not the parts that deal with age verification -

https://news.ycombinator.com/item?id=44910161

But as usual, that no longer matters in online discourse: it forms a soundbite that backs up one side's preconception that the whole exercise is nefarious, so whether it's actually true doesn't matter.