hamdingers 4 days ago

That's correct, and the most competitive multiplayer games tend to have fixed tick rates on the server, but higher FPS is still beneficial (again, theoretically, for all but the highest level of competition) because your client-side inputs are sampled more frequently and your rendered frames are at most a couple of milliseconds old.

adastra22 4 days ago | parent [-]

I think you're missing the point. The game could be processing input and doing a state update at 1000 Hz while still rendering a mere 60 fps. There doesn't have to be any correlation whatsoever between frame rate and input processing. Furthermore, this would actually have less latency, because there wouldn't be a pipeline of frame buffers being worked on.

Tying the input loop to the render loop is a totally arbitrary decision that the game industry is needlessly perpetuating.
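
Roughly the shape I have in mind, as a toy single-file sketch (the State struct, the tick rates, and the mutex-protected snapshot are placeholders for illustration, not how any particular engine does it): a simulation thread samples input and updates state at 1000 Hz, while the "render" loop runs at ~60 fps and only ever reads the latest completed snapshot.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    struct State {                 // toy game state, stands in for the real thing
        double x = 0.0;
        long   tick = 0;
    };

    std::mutex        mtx;
    State             latest;      // most recently completed simulation step
    std::atomic<bool> running{true};

    void simulate() {              // 1000 Hz input sampling + state update
        using clock = std::chrono::steady_clock;
        auto next = clock::now();
        State s;
        while (running) {
            // ...poll input devices here...
            s.x += 0.001;          // integrate one 1 ms step
            ++s.tick;
            { std::lock_guard<std::mutex> lk(mtx); latest = s; }  // publish snapshot
            next += std::chrono::milliseconds(1);
            std::this_thread::sleep_until(next);
        }
    }

    int main() {
        std::thread sim(simulate);
        for (int frame = 0; frame < 180; ++frame) {   // ~3 s of "rendering" at ~60 fps
            State snap;
            { std::lock_guard<std::mutex> lk(mtx); snap = latest; }
            std::printf("frame %3d drew sim tick %ld\n", frame, snap.tick);
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
        running = false;
        sim.join();
    }

The render loop could drop to 30 fps or climb to 240 fps and the input/simulation rate wouldn't change at all.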

hamdingers 4 days ago | parent | next [-]

No, I'm explaining how most games work in practice.

You're right that a game could be made to work that way. I'm not aware of one (my knowledge isn't exhaustive, and it wouldn't surprise me if examples exist), but that wasn't the question.

adastra22 4 days ago | parent [-]

I would not at all be surprised that there are examples out there, although I don't know of them. Tying the game state to the render loop is a decision made very deep in the game engine, so you'd have to make extensive modifications to any of the mainstream engines to do something else. Not worth the effort.

But a greenfield codebase shouldn't be perpetuating this mistake.

whstl 4 days ago | parent | next [-]

That's a super interesting discussion.

On most modern engines there is already a fixed-step update that runs at a fixed rate to make physics calculations deterministic, so this independence is possible.
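
For anyone unfamiliar, that fixed-step arrangement usually looks something like the well-known accumulator pattern. This is a simplified sketch with stubbed-out physics and rendering, not any engine's actual code:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Physics advances in fixed steps; rendering happens once per outer iteration
    // and interpolates between the two most recent physics states.
    int main() {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 60.0;        // fixed physics step, in seconds
        double accumulator = 0.0;
        double prevPos = 0.0, currPos = 0.0; // toy 1-D "game state"
        auto last = clock::now();

        for (int frame = 0; frame < 300; ++frame) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            while (accumulator >= dt) {      // zero or more fixed updates per frame
                prevPos = currPos;
                currPos += 1.0 * dt;         // deterministic physics step
                accumulator -= dt;
            }

            double alpha = accumulator / dt; // blend factor for smooth rendering
            double drawPos = currPos * alpha + prevPos * (1.0 - alpha);
            std::printf("frame %3d draws pos %.4f\n", frame, drawPos);
            std::this_thread::sleep_for(std::chrono::milliseconds(16)); // stand-in for rendering
        }
    }

The point being: the physics rate (the inner while) and the render rate (the outer loop) are already separate knobs.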

However, while it is technically possible to run the state updates at a higher frequency, this isn't done in practice because the rendering part wouldn't be able to consume that extra precision anyway.

That's mainly because the game state kinda needs to remain locked: 1) while a frame is being rendered, to avoid visual artifacts (e.g. the character and its weapon drawn in different places because the weapon started rendering after a state change) or even crashes (from reading partially modified data); 2) while fixed-step physics updates are being applied; and 3) whenever there's work happening on other threads (common in high-FPS games).

You could technically copy the game state functional-style when it needs to be used, but the benefits would be minimal: input/state changes are extremely fast compared to anything else. Doing this "too early" can even cause input lag. So the simple solution is just to apply the state change at the beginning of the while loop, at the last possible moment before this data is processed.
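
To picture the copy approach, here's a toy sketch: the renderer gets a by-value snapshot of a made-up, tiny State struct, taken only after input and updates have been applied. Real game state is far too large to copy this casually, which is exactly why the benefit is minimal.

    #include <cstdio>
    #include <thread>

    struct State { double playerX, weaponX; };   // toy state; real state is huge

    // Render from a by-value snapshot so the simulation can keep mutating its
    // own copy without the frame ever seeing a half-updated player/weapon pair.
    void renderFrame(State snap) {
        std::printf("draw player at %.2f, weapon at %.2f\n",
                    snap.playerX, snap.weaponX);
    }

    int main() {
        State state{0.0, 0.0};
        for (int frame = 0; frame < 3; ++frame) {
            // 1) apply input / state changes first, at the top of the loop
            state.playerX += 1.0;
            state.weaponX  = state.playerX;      // keep them consistent *before* the copy
            // 2) hand the renderer an immutable copy of the finished state
            //    (one thread per frame only to make the copy obvious)
            std::thread render(renderFrame, state);
            // 3) the simulation could already start mutating `state` here
            render.join();
        }
    }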

Source: worked professionally with games in a past life and been in a lot of those discussions!

vintermann 4 days ago | parent | prev [-]

I can give an example. I'd heard that Super Meat Boy was hard, and it was, but it turned out that if you ran it at the 60 Hz it was designed for instead of 75 Hz, it was considerably easier. At 120 Hz it was unplayable.

You can kind of understand why the game loop is tied to the refresh rate in games like this, though. Practicing "pixel perfect" jumps must be challenging if the engine updates aren't necessarily in sync with what's happening on screen. And in the really old days (when platformers were invented!) there was no real alternative to keeping the engine in sync with the screen.

adastra22 4 days ago | parent [-]

In the model I'm describing there would be a full game-state update on every tick, completely decoupling the frame rate from response latency and prediction steps.

fizzynut 4 days ago | parent | prev [-]

Doing that will increase input latency, not decrease it.

There are many tick rates that happen at the same time in a game, but generally grabbing the latest input at the last possible moment before updating the camera position/rotation is the best way to reduce latency.

It doesn't matter if you're processing input at 1000 Hz if the rendered output has 16 ms of latency embedded in it. If you can render the game in 1 ms, then the generated image has only 1 ms of latency embedded into it.
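
Concretely, the low-latency ordering looks roughly like this; pollMouseDeltaX / runSimulation / render are stand-ins for the real input backend, simulation step and renderer, not actual API calls:

    #include <cstdio>

    // Stand-in functions; in a real engine these would be the input backend,
    // the simulation step, and the renderer.
    double pollMouseDeltaX() { static double t = 0.0; return (t += 0.1); }
    void   runSimulation()   { /* AI, physics, gameplay: the expensive part */ }
    void   render(double cameraYaw) { std::printf("render with yaw %.2f\n", cameraYaw); }

    int main() {
        double cameraYaw = 0.0;
        for (int frame = 0; frame < 3; ++frame) {
            runSimulation();                    // do the slow work first
            cameraYaw += pollMouseDeltaX();     // sample input at the last moment...
            render(cameraYaw);                  // ...so the presented frame embeds as
                                                // little input age as possible
        }
    }

Sampling the mouse after the expensive simulation work means the camera in the presented frame is built from input that's as fresh as possible.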

In a magical ideal world where you know exactly how long a frame will take to render, you could schedule it to start at a specific time to minimise input latency, but that introduces other problems: it's very vulnerable to frame-time jitter, and software scheduling is itself jittery.