mwexler 21 hours ago

Finally. I will now be able to run ~340 undecillion devices on my home network. I'll have the smartest "smart home" on the block.

dwedge 21 hours ago | parent | next [-]

I never understood the rationale of giving out /64 and /48 like candy after what happened with ipv4. I know it's still a massive increase in capacity and I know it makes the networking easier but it seems like we went from something that definitely won't run out (ipv6 addresses) to something that probably won't (number of /48 ranges)

teraflop 20 hours ago | parent | next [-]

I can think of at least two reasons why this isn't worth worrying about.

One is quantitative: you have to remember that 2^48 is a much, much bigger number than 2^32. With 2^32 IPv4 addresses, you have about 0.5 addresses per human being on the planet, so right away you can tell that stringent allocation policies will be needed. On the other hand, with 2^48 possible /48 ranges, there are about 35,000 ranges per human being.

So even if you hand a few /48s out free to literally everyone who asks, the vast majority will still be unallocated. A /48 is only about 0.003% of what could be said to be a "fair" per-person allocation. (And yet, a /48 is so huge in absolute terms that even the vast majority of organizations would never need more than one of them, let alone individuals.)

The other is that unlike, say, the crude oil we pump out of the ground, IP address ranges are a renewable resource. If you hand out a free /48 to every person at birth, then long before you start running out of ranges, people will start dying and you can just reclaim the addresses they were using.
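A quick back-of-the-envelope check of those figures (a sketch; population rounded to 8 billion):

```python
# Back-of-the-envelope check of the per-person address arithmetic above.
WORLD_POP = 8_000_000_000        # world population, rounded

ipv4_per_person = 2 ** 32 / WORLD_POP      # total IPv4 addresses per person
slash48s_per_person = 2 ** 48 / WORLD_POP  # total /48 prefixes per person

print(round(ipv4_per_person, 2))    # 0.54 -- scarcity is built in
print(round(slash48s_per_person))   # 35184 -- roughly 35,000 /48s each
```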

ndriscoll 21 hours ago | parent | prev | next [-]

/48s are "small" enough that we could give ~8 billion people each 35,000 of them and we'd still have ~1.5 trillion (over 300x the size of the ipv4 space) left over. Addresses are basically infinite, but routing table entries (which fragmentation necessitates) have a cost.

tosti 20 hours ago | parent [-]

Current recommendation (afaict; I am not a network tech) for ISPs is to give everyone a /56. Not every ISP does that, of course.

flumpcakes 20 hours ago | parent | prev | next [-]

In IPv6 the smallest 'subnet' is a /64, if I recall correctly.

It's weird having a subnet size equal to a complete IPv4 Internet's worth of IPv4 Internets, but I believe the rationale was that you would never in practice run out of IPs in your subnet. A lot of enterprise IPv4 headaches come from managing subnets that are not correctly sized (organic growth, etc.). IPv6 is always routable for a similar reason: companies reusing RFC1918 space makes connecting networks a pain.

There are different headaches with IPv6, such as re-IPing devices if they move subnet, i.e. move physical location, or during a failover, etc.

I'm not sure what the best practice there is, as many enterprises don't use IPv6 internally. In my experience, anyway.
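As an illustration of that fixed /64 boundary, here's a sketch using Python's standard `ipaddress` module and the RFC 3849 documentation prefix: a single /48 site splits into 65,536 /64 subnets.

```python
import ipaddress

# A /48 site allocation, using the RFC 3849 documentation prefix.
site = ipaddress.ip_network("2001:db8:abc::/48")

# Splitting it at the standard /64 boundary yields 2^16 = 65,536 subnets,
# each holding 2^64 addresses -- no more re-sizing subnets as they grow.
first = next(site.subnets(new_prefix=64))
print(first)                          # 2001:db8:abc::/64
print(site.num_addresses == 2 ** 80)  # True: a /48 has 80 host bits
```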

quietsegfault 17 hours ago | parent [-]

The big issue I see is that every enterprise has a solid numbering plan for its RFC1918 networks. Unfortunately, many of them have the SAME plan, and when network peering between SaaS vendors and customers was more popular (now, of course, privatelink seems to be the move) we constantly ran into conflicts. There's still a risk of conflict with IPv6, but I think if numbering decisions are made thoughtfully, conflicts can be avoided.

Dagger2 15 hours ago | parent [-]

There's no risk at all if you're using your own allocated prefix, because those are managed by IANA/RIRs/LIRs to not overlap.

Incidentally, if you find yourself experiencing an RFC1918 clash, one simple way of fixing it is to use NAT64 to map the remote side's RFC1918 into a /96 from your v6 allocation. You can write the last 32 bits of a v6 address in v4 format, so this leads to addresses like 2001:db8:abc:6401::192.168.0.10 and 2001:db8:abc:6402::192.168.0.10, which don't overlap from your perspective.

(If you wanted something simpler to type you could put them at e.g. fd01::192.168.0.10... but then you do start running the risk of collisions with other people who also thought they could just use a simple ULA prefix.)
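For concreteness, here's a sketch of that embedding with Python's `ipaddress` module, using the documentation prefixes from the example above (the helper name is mine):

```python
import ipaddress

def embed_v4(prefix96: str, v4: str) -> ipaddress.IPv6Address:
    """Map an IPv4 address into the last 32 bits of a /96 IPv6 prefix,
    the same embedding NAT64 uses (RFC 6052). Helper name is illustrative."""
    net = ipaddress.ip_network(prefix96)
    assert net.prefixlen == 96, "need a /96 to hold a 32-bit IPv4 address"
    return net[int(ipaddress.IPv4Address(v4))]

# Two peers whose RFC1918 plans clash, mapped through distinct /96es:
a = embed_v4("2001:db8:abc:6401::/96", "192.168.0.10")
b = embed_v4("2001:db8:abc:6402::/96", "192.168.0.10")
print(a)        # 2001:db8:abc:6401::c0a8:a
print(a != b)   # True -- no overlap from your side's perspective
```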

wolvoleo 20 hours ago | parent | prev | next [-]

Yes. I wish they had simply used a saner address length instead, and maybe given everyone at most 65,535 addresses. More than enough for the craziest home lab ever.

Really, just adding 2 bytes to IPv4 would have fixed everything and made it a lot simpler to move over. IPv6 is overkill, and I think that really hurt its adoption. I remember being at uni and being told "this is the next big thing". In 1993. And it's not even a big thing now. Not on the user side anyway; I can still access everything over IPv4.

Dagger2 16 hours ago | parent | next [-]

Adding two bytes would have been just as much work as adding 12 bytes, and would have left us with too few addresses rather than too many. The MAC address space is now 64 bits and L3 is necessarily less dense than L2, so 128 bits is the smallest power of 2 where we can be reasonably sure we won't end up with too few addresses.

Considering how hard deploying a new L3 protocol is, we're only going to get one shot at it so it's a lot better to end up with too many addresses rather than too few.
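The 64-bit interface-ID half that this sizing leaves room for was originally filled from the MAC under SLAAC via modified EUI-64 (RFC 4291), though modern stacks usually randomise it instead. A minimal sketch of that derivation (the function name is mine):

```python
def modified_eui64(mac: str) -> str:
    """Derive the SLAAC interface ID from a 48-bit MAC address
    (modified EUI-64, RFC 4291). Function name is illustrative."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02                                     # flip the universal/local bit
    eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# A 48-bit MAC expands into the low 64 bits of a SLAAC address:
print(modified_eui64("00:1a:2b:3c:4d:5e"))   # 21a:2bff:fe3c:4d5e
```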

wolvoleo 16 hours ago | parent [-]

Ehm, but IPv6 packets still have the L2 layer as well, right? Which already includes the MAC address. So that 64-bit MAC space is duplicated; it's not like you're saving any bits. It was a pretty arbitrary decision to accommodate the MAC address inside the IPv6 address, and these days it's usually randomised anyway for privacy purposes, so the MAC-derived part of an IPv6 address doesn't have to be the size of the MAC address.

L3 has nothing to do with MAC addresses anyway, so I've always found that a pretty weird decision. Sure, it avoids having to implement ARP, but we need that again now with the randomisation. And ARP is a once-every-few-minutes kind of thing anyway.

I'm pretty sure that if we'd just gone for "a couple bytes extra" we'd have long since been completely over. It's the whole L3 transition itself that suffers from the complexity. I remember it well in the 2000s; nobody in telecoms wanted to touch it. And when IPv6 was invented in '93 or so, the installed base was extremely small. It'd have been a piece of cake to get it over with then.

boredatoms 20 hours ago | parent | prev | next [-]

If we actually get to the point of address shortages, then either NATv6 would become a thing, or (I hope) SLAAC would get deprecated and DHCPv6 would become mandatory, so we could give out prefixes smaller than /64.

ndriscoll 20 hours ago | parent [-]

2^64 is 18,446,744,073,709,551,616. That's 18 quintillion, about 10^19. There are ~10^10 people on the planet. Each person could have ~10^9 networks (not even devices) before we ran out of /64s.
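Checking that arithmetic (a sketch; population rounded up to 10^10):

```python
# Sanity-check the /64 arithmetic above.
slash64_count = 2 ** 64
print(slash64_count)              # 18446744073709551616, i.e. ~1.8 * 10^19
# At ~10^10 people, that's roughly 1.8 * 10^9 /64 networks per person:
print(slash64_count // 10 ** 10)  # 1844674407
```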

bluGill 20 hours ago | parent | prev [-]

/48, because Ethernet MAC addresses are that length, so you can assign everything one and find it.

mrsssnake 20 hours ago | parent | prev [-]

> able to run ~340 undecillion devices on my home network

You can now have these devices connected to the network called the Internet.

Unlike IPv4, where the number of devices in a home network that are actually on the Internet is one (the main router) or zero (in the case of CGNAT), and the others just pretend.