> of observing the stated goals, which differ from the purported goals.
The problem is precisely that it doesn't show that. The Online Safety Act is described, in this public explainer, as legislation that provides protections to multiple groups. Paragraph two says that "the strongest protections" are offered to children, while paragraph three then notes that "The act will also protect adult users".
What is described is a tiered set of protections: a baseline that protects everyone (including adults), and a narrower set of stronger protections extended only to children. It follows quite logically that you only need to know a user's age if you want to show adults content that you are not allowed to show children.
The "categorization" they are discussing is another axis of "tiering". Smaller provides (in categories 2A and B) are imposed less duty of protection, according to the explainer to account for their "size and capacity".
With this context, I think it's quite clear that the comments about the targeting of Category 1 are completely pedestrian. The act isn't supposed to apply differently to PornHub and Amazon, because both are large multinationals with enough resources to uphold the duties imposed on them.
For this to reveal anything nefarious about age verification, it would have to be about the designations of "Primary Priority Content" and "Priority Content", which are the types of content you are allowed to show adults but not children.
It is all intensely boring, so I can't blame the news for not wanting to cover it, but it is exactly the type of context you have to include when making quite extraordinary accusations of misleading the public.