hinkley 3 hours ago

I get so many Second System Syndrome vibes off of IPv6. Surely other people must be picking it up too.

Future proofing it by jumping straight to 128 bits instead of 64. 64 would have been fine. Even with a load factor of 1:1000 by assigning semantics to ranges of IP addresses, 64 bit addressing is still enough addresses for 10 million devices per person.

If we become a galactic empire, we will have to replace the Web anyway because every interaction will have to be a standalone app or edge networking that doesn’t need to hear back from the central office for minutes, hours, days anyway. We could NAT every planet and go on forever.

GuB-42 2 hours ago | parent | next [-]

The point is not really to support a galactic empire. The idea is that you have a network part and an interface part, each 64 bits. The network part is used by routers; the interface part identifies the device on the endpoint. Each interface has an identifier that is globally unique (usually based on the MAC address), and each network is also unique. Usually, your ISP gives you a /48 prefix, so you have 16 bits for potentially 64k internal networks. This way, you don't need something like DHCP to get an address: you just take one and you won't have conflicts.

But because you have two independent unique parts, you need twice as many bits, so 64+64=128 bits. It simplifies routing and address allocation, at the cost of an extra 16 bytes per packet (8 extra bytes for each of the two addresses) compared to 64-bit addressing.
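The 64/64 split can be sketched in Python. This is an illustrative sketch only: the /48 prefix, subnet ID, and MAC address below are made-up example values, and the interface ID is derived with the classic modified EUI-64 scheme mentioned above.

```python
import ipaddress

# Hypothetical example values: a /48 site prefix from the ISP, a locally
# chosen 16-bit subnet ID, and a device MAC address.
SITE_PREFIX = "2001:db8:1234::/48"   # 48 bits assigned by the ISP
SUBNET_ID = 0x0001                   # 16 bits chosen locally -> 64-bit network part

def eui64_interface_id(mac: str) -> int:
    """Derive the modified EUI-64 interface identifier from a MAC address."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                     # flip the universal/local bit
    b[3:3] = b"\xff\xfe"             # insert ff:fe between the two MAC halves
    return int.from_bytes(bytes(b), "big")

def slaac_address(prefix: str, subnet: int, mac: str) -> ipaddress.IPv6Address:
    """Combine the 64-bit network part with the 64-bit interface part."""
    net = ipaddress.ip_network(prefix)
    network_part = int(net.network_address) | (subnet << 64)
    return ipaddress.IPv6Address(network_part | eui64_interface_id(mac))

addr = slaac_address(SITE_PREFIX, SUBNET_ID, "00:11:22:33:44:55")
print(addr)  # 2001:db8:1234:1:211:22ff:fe33:4455
```

Note how no server had to hand out the address: the device can compute it locally from the advertised prefix and its own hardware identifier.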

That we could use IPv6 in a galactic empire is an added bonus, but not really the reason.

system2 an hour ago | parent [-]

Bypassing the router to get to the device directly via IP sounds like insanity. Like a forever-open port.

GuB-42 7 minutes ago | parent | next [-]

You are not bypassing the router; the devices still need to get their packets from somewhere, and it is only like a forever-open port if the router/firewall decides it is.

My ISP router supports IPv6 but blocks all incoming connections by default, which is kind of like what NAT does as a side effect.

It sounds like insanity because we tend to assume that no NAT means no firewall, because NAT has some firewall-like properties, and on the most basic networks, that's the only "firewall" there is. But none of the security features of "NAT as a firewall" are exclusive to IPv4, in fact, IPv6 has an advantage because the much larger address space makes a full scan practically impossible.

hdgvhicv 40 minutes ago | parent | prev | next [-]

Anti-NAT advocates seem to fall into the "the network shouldn't provide a stateful firewall" camp, because once you have a stateful firewall, NAT is a trivial amount of extra bytes and causes very few issues with modern protocols (ones that don't embed layer 3 addressing in layer 6/7 messages).

tekne 24 minutes ago | parent [-]

I like the end-to-end principle. Good times.

krupan an hour ago | parent | prev [-]

It's a big privacy problem too. Basing your IP address on your MAC address doesn't help in that regard either. Times have changed a lot since IPv6 was invented.

sgjohnson 22 minutes ago | parent [-]

> Basing your IP address on your MAC address doesn't help in that regard either.

This hasn’t been the case for 20 years. Privacy Extensions solved that, and every SLAAC implementation supports them.

PunchyHamster 3 hours ago | parent | prev | next [-]

> Future proofing it by jumping straight to 128 bits instead of 64. 64 would have been fine. Even with a load factor of 1:1000 by assigning semantics to ranges of IP addresses, 64 bit addressing is still enough addresses for 10 million devices per person.

128-bit is about the least of the adoption issues; the difference vs 64 is basically meaningless.

But it shows weird priorities that they decided on 128 bits and then immediately spent half of it on the host part, just to achieve a "globally unique" host part that isn't really all that useful a characteristic of the protocol.

sedatk 2 hours ago | parent | next [-]

> to achieve "globally unique" host part that isn't really all that useful characteristic of the protocol.

That's the essential part of self-configured addresses in IPv6 that does away with DHCP in most cases. DHCP is a stateful system that has to track every device's addresses individually. You don't need that with IPv6 thanks to this.

rmwaite 2 hours ago | parent | prev | next [-]

IP addresses were always meant to be globally reachable. Of course, NAT has corrupted this - which is why NAT is a scourge.

hdgvhicv 39 minutes ago | parent [-]

And so are firewalls?

convolvatron 23 minutes ago | parent [-]

firewalls are a choice that the end user makes.

non-routed prefixes are a limitation imposed by the ISP that the user can't address.

api 3 hours ago | parent | prev [-]

I kinda think we could fix/save IPv6 by taking away almost everything but the 128-bit address extension.

vasco 2 hours ago | parent [-]

The truth is nothing needed fixing, or we wouldn't have been in this position 30 years later

patmorgan23 2 hours ago | parent [-]

Disagree. APNIC got screwed on the IP allocation side: they're the RIR with the largest population, but they have a tiny amount of IP space compared to ARIN. India and China have billions of people and not enough v4 space for them. If we went back and reallocated legacy blocks, maybe you could make the system work, but that would be a big fight with the legacy networks.

v6 restores the end-to-end principle and reduces network complexity once you go v6-only. No more NAT traversal problems, no need to deal with STUN/TURN, and small networks get even simpler with no need for a stateful DHCP server.

Sticking with only v4 space also artificially increases the cost of starting new networks and services, because you have to buy space from the entrenched IP space owners (unless we change the rules and start charging fees to legacy networks and reclaiming unused or poorly utilized space). Those higher barriers to entry hurt innovation and competition.

So v6 solves several technical and policy issues with the Internet, and maybe that's why we haven't seen speedy adoption: people have networks that exist today, some have paid a lot of money for IPv4 space, and they want to make the most of that investment.

They don't really have an incentive to implement V6 unless things start to break without it.

I don't think v6 has been a failure: half of all internet traffic runs on it! It powers the major cell phone networks, and large tech companies like Meta have even gone v6-only in their data centers.

cm2187 2 hours ago | parent | prev | next [-]

I don't think the problem is 64 vs 128. I don't think the problem is end users either, the vast majority of whom don't even know what the IP protocol is in the first place (nor should they). The fault, I think, is on ISPs.

I use Hyperoptic in the UK; if you replace the original router (which reserves the external 443 port for itself, i.e. no one sophisticated would keep it), there seems to be no way to get a v6 address. This is pure incompetence and carelessness, like ISPs allowing their networks to send packets spoofing IPs from outside their network. Add to that foreign ISPs (meaning that even if your own network supports v6, you need v4 support when you are on holiday/travelling), and you have a situation where v4 cannot simply be switched off.

So for a website, what is the point of supporting v6 if v4 is never going away?

tptacek 2 hours ago | parent | prev | next [-]

It's understandable that IPv6 would be ambitious rather than incremental given the cost of rolling out a new protocol; the bells-and-whistles IPv6 design is probably just a relatively small constant factor more expensive than the simplest possible address space expansion. Viewed that way, you only get one chance to update the protocol, so you might as well fix whatever you can.

yuvadam 3 hours ago | parent | prev | next [-]

how would you do SLAAC with 64 bits?

hinkley 3 hours ago | parent | next [-]

Was DHCP so bad? It carries information important to using such a device anyway.

DaSHacka 3 hours ago | parent | next [-]

+1, the majority of corporate networks I have seen used DHCPv6 or similar anyway

convolvatron 2 hours ago | parent | prev [-]

well, it's not without issues. the actual motivation was not that DHCP is the suxxors, but to promote a model where the assigned prefix was free and highly dynamic.

the goal being to support multiple prefixes to handle the common case of multiple internet connections, and more importantly to allow providers to shuffle the address space around without having to coordinate with the end organization. this was perceived to be necessary to prevent the v6 address space from accruing segmentation.

api 3 hours ago | parent | prev | next [-]

You don't, and that's fine.

cyberax an hour ago | parent | prev [-]

The same way you do it now. The router announces a prefix, and devices negotiate unique addresses.

Keep in mind that SLAAC isn't actually stateless. Modern IPv6 stacks use privacy addresses, so they still need to run duplicate address detection.
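A minimal sketch of the privacy-address idea, assuming a made-up /64 prefix: the interface ID is drawn at random rather than derived from the MAC, which is exactly why the stack still has to probe for duplicates before using the address.

```python
import ipaddress
import secrets

# Hypothetical /64 prefix announced by the router.
prefix = ipaddress.ip_network("2001:db8:1234:1::/64")

# Privacy-style address: a fresh random 64-bit interface ID instead of a
# stable MAC-derived one, so the address changes over time.
interface_id = secrets.randbits(64)
addr = ipaddress.IPv6Address(int(prefix.network_address) | interface_id)
print(addr)  # random each run, e.g. 2001:db8:1234:1:xxxx:xxxx:xxxx:xxxx

# Because the ID is random, two hosts could in principle pick the same one;
# duplicate address detection probes the address before committing to it.
```

The random draw replaces the state a DHCP server would otherwise keep, at the cost of that extra on-link collision check.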

There's also a proposal to have SLAAC with longer prefixes, because otherwise you need to use DHCP-PD if you want to have subnetting in IPv6.

delusional 3 hours ago | parent | prev [-]

> Future proofing it by jumping straight to 128 bits instead of 64.

It's hard to disagree with your point since 64 would definitely have been better than the 32 we have. I'm not convinced the choice of going for 128 bits posed any real challenge to adoption though.

hinkley 3 hours ago | parent [-]

The irony that I forgot to voice is that if we had gone 64-bit with fewer features, we'd be farther along in adoption now and probably consuming the address space at a fraction of the rate people feared.

By raising the barrier to entry so high we guaranteed the features would likely never be needed.

xpltr7 an hour ago | parent [-]

They did have a proposal for 64-bit... it was IPv7.

p_l 23 minutes ago | parent [-]

They also had IPv9 with 20-byte (160-bit) addresses, though some of that was consumed by a common prefix announcing "this is a TUBA address". It was even something already supported by some hardware and software, as it was just dropping IP and replacing it with CLNP and transporting TCP and UDP over it (I think the most complex part was adapting ICMP-based tools).