Thaxll (2 days ago):
Even in the same city most people won't have 7 ms; not sure where that number comes from.
skhr0680 (2 days ago):
Isn't ping mostly determined by the physical distance between you and the server?
burnt-resistor (2 days ago):
Sort of, but not precisely. In practice, it really depended on the slowest link's maximum single-channel bandwidth*, the oversubscription ratio of the backhaul(s), router equipment and configuration like QoS/packet prioritization... and then it also depended on internet traffic at that particular time of day.

In my case, I was 3-4 hops away at 34 mi / 55 km straight-line distance, 110 mi / 177 km driving, and, most importantly, roughly 142 mi / 230 km of cable distance, estimated by mapping paths near highways in Google Earth. I doubt the CalREN/CENIC network path was used because it never showed up as a hop in traceroute (although nothing prevented intermediaries from encapsulating and transiting flows across other protocols and networks), but it definitely went through PAIX.

* Per technology, the zero-distance minimum delay is a function of the maximum single-channel bit rate and the data size plus the overhead of the lower-layer encapsulating protocol(s), which here was probably UDP + IP + one or more lower layers such as Ethernet, ATM, ISDN/frame relay BRI/PRI, DSL, or POTS modems. With a 1 Gbps link built from a billion 1 Hz/baud channels, it's impossible for a single-bit packet to have a latency lower than 1 second.
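To make the footnote concrete, here is a minimal serialization-delay sketch (the packet size, framing overhead, and link rates are assumptions for illustration, not measurements from the setup described above):

```python
def serialization_delay_ms(payload_bytes: int, overhead_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire, ignoring propagation and queuing."""
    total_bits = (payload_bytes + overhead_bytes) * 8
    return total_bits / link_bps * 1000

# A small game packet (assumed 64-byte UDP payload) with UDP + IPv4 + Ethernet framing.
payload, overhead = 64, 8 + 20 + 26
for name, bps in [("1 Gbps Ethernet", 1e9), ("1.5 Mbps DSL", 1.5e6), ("56 kbps POTS modem", 56e3)]:
    print(f"{name}: {serialization_delay_ms(payload, overhead, bps):.3f} ms")
```

The slowest hop's figure dominates the minimum per-packet delay, which is the point about the "slowest link" above.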
shdh (2 days ago):
Due to how most FPS games are implemented, you are actually seeing other entities in their past state.

The game will buffer two "snapshots", each containing a list of entities (players, weapons, throwables, etc.), and it will linearly interpolate the entities between the two states over a certain period of time (typically the snapshot interval).

The server might have a "tick rate" of 20, meaning a snapshot is created for a client every 50 ms (1000 ms / 20 ticks). The client will put that snapshot into a buffer and wait for a second one to be received. Once the client is satisfied with the number of snapshots in the buffer, it will render entities positioned between the two snapshots, translating each entity from its position in the first snapshot to its position in the second.

Therefore, even with 5 ms ping, you might actually be seeing entities 55 ms in the past.
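A minimal sketch of that scheme, assuming a 20 Hz tick rate and made-up 2D positions (the names and structure are illustrative, not any particular engine's API):

```python
import bisect
from dataclasses import dataclass

TICK_RATE = 20
SNAPSHOT_INTERVAL = 1.0 / TICK_RATE  # 50 ms between server snapshots

@dataclass
class Snapshot:
    server_time: float
    positions: dict  # entity id -> (x, y)

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def interpolated_positions(buffer: list, render_time: float) -> dict:
    """Render one snapshot interval in the past, blending between the two
    snapshots that straddle that target time."""
    target = render_time - SNAPSHOT_INTERVAL  # deliberately render the past
    times = [s.server_time for s in buffer]
    i = bisect.bisect_right(times, target)
    if i == 0 or i == len(buffer):
        # Not enough snapshots around the target time; snap to the nearest one.
        return buffer[min(i, len(buffer) - 1)].positions
    older, newer = buffer[i - 1], buffer[i]
    t = (target - older.server_time) / (newer.server_time - older.server_time)
    return {
        eid: (lerp(x, newer.positions[eid][0], t), lerp(y, newer.positions[eid][1], t))
        for eid, (x, y) in older.positions.items() if eid in newer.positions
    }

# Two snapshots 50 ms apart; rendering at t = 0.075 s shows entity 1 halfway between them.
buf = [Snapshot(0.00, {1: (0.0, 0.0)}), Snapshot(0.05, {1: (1.0, 0.0)})]
print(interpolated_positions(buf, render_time=0.075))  # {1: (0.5, 0.0)}
```

With 5 ms of network latency plus the 50 ms interpolation window, what you see on screen is roughly 55 ms old, which matches the figure in the comment.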
fragmede (2 days ago):
Depends on the location of you and the server. Obviously it can't go faster than the speed of light, but it can go much slower: it doesn't travel as the crow (or photon) flies but in more of a zigzag path, it has to traverse several photon/electron translation hardware hops (aka routers and switches), and there's typically some packet loss and buffer bloat to contend with as well.

The speed of light in fiber is slower than in a vacuum, to be fair, but the latency you experience is dominated less by the raw speed of the photon in fiber, which is still quite fast (certainly faster than you or I can run), than by all the other reasons you don't get anywhere near that theoretical maximum. From me to Australia should be ~37 milliseconds one way going by the speed of light, but it's closer to 175 milliseconds (meaning a ping of ~350). Never mind the latency that being on wifi adds on top of that.

https://www.pingdom.com/blog/theoretical-vs-real-world-speed...
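A back-of-the-envelope version of that comparison (the distance is an assumption, chosen to roughly match the ~37 ms figure; the 175 ms value is the one quoted above):

```python
C_VACUUM_KM_S = 299_792   # speed of light in a vacuum
C_FIBER_KM_S = 200_000    # roughly 2/3 c inside glass fiber

distance_km = 11_000              # assumed great-circle distance to Australia
observed_one_way_ms = 175         # figure quoted in the comment above

vacuum_ms = distance_km / C_VACUUM_KM_S * 1000   # ~37 ms, the "crow flies" bound
fiber_ms = distance_km / C_FIBER_KM_S * 1000     # ~55 ms if the fiber ran straight
print(f"vacuum: {vacuum_ms:.0f} ms, straight fiber: {fiber_ms:.0f} ms, "
      f"observed: {observed_one_way_ms} ms (ping ~{2 * observed_one_way_ms} ms)")
```

The gap between the ~55 ms straight-fiber figure and the observed ~175 ms is the routing detours, peering, and queuing described above.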
Hikikomori (2 days ago):
There are many problems with this article; it's written for laymen, so it simplifies things, but it's also factually incorrect. Fiber is usually laid next to roads or railways, which usually do not zigzag. Modern routers/switches have a forwarding delay of micro/nanoseconds. The beam in a single-mode fiber does not bounce around like a pinball; it doesn't bounce at all, hence the name.

Ping is largely a product of distance and the propagation speed in fiber (~200,000 km/s). It's not the distance a bird would fly, but it can be close to it sometimes. And then the internet is a collection of separate networks that are not fully connected, so even if your target is in the next building, your traffic might go through another, bigger city, as that is where the ISPs peer.
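A rough rule-of-thumb estimator in that spirit (route distance, hop count, and per-hop delay are assumptions; the point is that forwarding delay is negligible next to propagation over any real distance):

```python
FIBER_KM_PER_S = 200_000  # propagation speed in fiber, roughly 2/3 c

def estimate_rtt_ms(route_km: float, hops: int = 15, per_hop_us: float = 50.0) -> float:
    """Round-trip time from fiber propagation plus router/switch forwarding delay."""
    propagation_ms = 2 * route_km / FIBER_KM_PER_S * 1000
    forwarding_ms = 2 * hops * per_hop_us / 1000
    return propagation_ms + forwarding_ms

# 1,000 km of fiber route: ~10 ms of propagation vs ~1.5 ms of forwarding overhead.
print(f"{estimate_rtt_ms(1_000):.1f} ms")
```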
burnt-resistor (2 days ago):
You're still missing many other significant factors besides distance. Many conditions affect latency, but the minimum theoretical value is mostly dominated by the slowest path technology's single-channel bandwidth. The other factors that reduce performance include:

- Network conditions
- High port/traffic oversubscription ratios
- QoS/packet service classification, i.e., discriminatory tweaks that stop, slow, or speed up certain kinds of traffic contrary to the principles of net neutrality
- Packet forwarding rate compared to physical link speed
- Network gear, client, and server tuning and (mis)configuration
- Signal booster/repeater latency
- And too many more to enumerate exhaustively

As such, point-to-point local and internet-spanning configuration troubleshooting and optimization is best done empirically through repeated experimentation, sometimes assisted by netadmin tools when there is access to intermediary infrastructure.
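For the "measure it empirically" part, a minimal sketch of a repeated measurement loop (the host is a placeholder; TCP connect time is used as a rough RTT proxy since ICMP needs raw sockets):

```python
import socket
import statistics
import time

def tcp_rtt_samples(host: str, port: int = 443, count: int = 10) -> list:
    """Measure TCP connect times (ms) as a rough proxy for round-trip latency."""
    samples = []
    for _ in range(count):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # handshake completed; close immediately
        samples.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)  # spread samples out a little
    return samples

rtts = tcp_rtt_samples("example.com")  # placeholder host
print(f"min {min(rtts):.1f} ms, median {statistics.median(rtts):.1f} ms, max {max(rtts):.1f} ms")
```

Taking the minimum over many samples filters out most of the queuing and scheduling noise, leaving something close to the propagation-plus-serialization floor.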
Hikikomori (2 days ago):
I wasn't enumerating all sources of latency. I wrote "largely", as after some amount of distance all the other factors are not really relevant in a normally functioning network (one without extreme congestion).
|
|
|
|
|