craftkiller 2 hours ago

It still helps. I have a 1U in a colo which gives me a /64 for IPv6 and ~5 addresses for IPv4. I just set up a dual-stack Kubernetes cluster on 6 virtual machines. When I want to ssh into one of the machines, my options are:

  1. Use IPv6 which works and goes directly to the virtual machine because each virtual machine grabs its own address from one of my 18446744073709551616 addresses.
  2. Use IPv4 and either have to do a jumphost or do port forwarding, giving each virtual machine its own port which forwards to port 22 on the virtual machine.
  3. Use a VPN.
I have all 3 working, but #1 was significantly less setup and works the best.
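The three options sketch out roughly like this in an `~/.ssh/config` (all hostnames, addresses, and ports below are illustrative, not the commenter's actual setup):

```
# Option 1: direct IPv6 -- each VM has its own global address
Host vm1-v6
    HostName 2001:db8:1234::10    # illustrative global IPv6 address
    User admin

# Option 2a: IPv4 via jumphost
Host vm1-jump
    HostName 10.0.0.10            # VM's private address, reached through the jump
    ProxyJump admin@203.0.113.5   # colo host's public IPv4

# Option 2b: IPv4 via port forwarding on the colo host
Host vm1-fwd
    HostName 203.0.113.5
    Port 2201                     # host forwards 2201 -> vm1:22
```

Option 1 needs no extra plumbing on the host, which is presumably why it was the least setup.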

Also being able to generate unique ULA subnets is super nice.
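For reference, generating a unique ULA prefix per RFC 4193 amounts to picking 40 pseudo-random bits under `fd00::/8` to form a /48. A minimal Python sketch:

```python
import secrets
import ipaddress

def generate_ula_prefix() -> ipaddress.IPv6Network:
    """Generate a random /48 ULA prefix per RFC 4193 (fd00::/8 with L=1)."""
    global_id = secrets.randbits(40)          # 40-bit pseudo-random Global ID
    # Byte 0 is 0xfd, then the 40-bit Global ID, then zeros for the subnet/host bits
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

prefix = generate_ula_prefix()
print(prefix)  # e.g. fd3c:a1b2:c3d4::/48 (random each run)
```

Because the Global ID is random, two independently generated ULA subnets are overwhelmingly unlikely to collide, which is what makes merging networks later painless.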

wolvoleo an hour ago | parent [-]

Really, using port 22 is ill-advised anyway, because you will get constant nuisance brute-force attacks. They accomplish nothing (because you're using keys or certificates, I hope), but they still eat up cycles on the crypto handshake.

craftkiller 31 minutes ago | parent | next [-]

By that same logic, using IPv4 is ill-advised because I could easily give the ssh endpoints their own IPv6 addresses, avoiding the need to hide behind non-standard ports. Scanning through 18446744073709551616 addresses is going to be a lot slower than scanning through 65536 ports.

Dagger2 35 minutes ago | parent | prev [-]

Really? I get somewhere in the region of none to barely any, depending on the server.

I mean, yes, you'll get a constant stream of them on IPv4, but why would you run a server on v4 unless you absolutely needed to? The address space is so small you can scan every IP in 5 minutes per port, and if you have my v4 address you can enumerate every single server I'm running just by scanning 65k ports.
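The back-of-envelope arithmetic behind that claim: a 5-minutes-per-port sweep of IPv4 implies a probe rate of about 2^32 / 300 ≈ 14.3M probes/s; at that same rate, a single /64 takes tens of thousands of years per port (the "5 minutes" figure is from the comment; the rest is just arithmetic):

```python
# Back-of-envelope: probe rate implied by scanning all of IPv4 in 5 minutes per port
ipv4_space = 2**32
ipv6_subnet = 2**64                 # hosts in a single /64
scan_time_v4 = 5 * 60               # seconds for one full IPv4 sweep
rate = ipv4_space / scan_time_v4    # ~14.3M probes per second
years = ipv6_subnet / rate / (365 * 24 * 3600)
print(f"{rate:.2e} probes/s -> ~{years:,.0f} years per port for one /64")
```

So "a thousand years" is, if anything, a generous understatement for a brute-force sweep of even one /64.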

Meanwhile, on v6, even the latter of those takes a thousand years. How would people even find the server?