| ▲ | o11c 3 days ago |
| The "refresh causes load" issue can be solved by doing long-polling instead of short-polling. Note that the http-equiv refresh will only trigger after the page is fully loaded, which long-polling prevents from happening, so you still have resilience for the case where the long-poll is interrupted mysteriously. |
|
| ▲ | glroyal 2 days ago | parent | next [-] |
| The point of the refresh (which can be activated with a meta tag) is that JavaScript is disabled in the game's server-rendered mode, so AJAX/Comet is out of the question. |
| |
| ▲ | o11c 2 days ago | parent [-] | | You don't need JS to do long-polling: just keep the main page's connection open without writing the trailing `</html>`. This does limit what you can do with the poll-added content, but simply allowing the refresh to take place is a strict improvement over refreshing eagerly. |
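A minimal sketch of that trickle-the-page idea in Python (the function and variable names here are hypothetical, and a real server would write these chunks straight to the socket rather than into a list): the head and body are sent immediately, the closing `</html>` is withheld until an update arrives or a timeout fires, and only then can the meta refresh trigger.

```python
import queue
import threading

def stream_page(events: queue.Queue, out: list, timeout: float = 5.0) -> None:
    """Serve an HTML page but hold the connection open: emit the head and
    body first, block waiting for the next game event, and only release
    the closing </html> afterwards so the meta refresh fires lazily."""
    # Hypothetical long fallback refresh, in case the poll is interrupted.
    out.append('<html><head><meta http-equiv="refresh" content="3600">'
               "</head><body><p>waiting for next turn...</p>")
    try:
        update = events.get(timeout=timeout)  # this blocking get IS the long poll
        out.append(f"<p>{update}</p>")
    except queue.Empty:
        out.append("<p>timed out; page will refresh.</p>")
    out.append("</body></html>")              # closing tag sent last

events = queue.Queue()
chunks: list = []
worker = threading.Thread(target=stream_page, args=(events, chunks))
worker.start()
events.put("player 2 moved")                  # the "notify" side of the poll
worker.join()
```

The key property is visible in the output: the first chunk contains no `</html>`, so a browser that has received it is still "loading" and will not refresh until the final chunk arrives.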
|
|
| ▲ | YannickR 3 days ago | parent | prev | next [-] |
| I haven’t tried this yet, but if it works this would be a very smart solution to the problem, as it could potentially also reduce delays between turns. |
|
| ▲ | motorest 2 days ago | parent | prev | next [-] |
| > The "refresh causes load" issue can be solved by doing long-polling instead of short-polling. ...and now you have to greatly scale up your backend infrastructure to keep an open connection for each and every single active user. |
| |
| ▲ | o11c 2 days ago | parent [-] | | With any decent backend implementation, idle connections should be really cheap - their memory cost is measured in individual pages, and the hard part is figuring out how to count the kernel side. | | |
| ▲ | motorest 2 days ago | parent [-] | | > With any decent backend implementation, idle connections should be really cheap (...) Not exactly. With sync calls each server instance can handle only a few hundred connections. With async calls each instance can in theory handle tens of thousands of concurrent requests, but each polling response can easily spike CPU and network load. This means your "it works on my machine" implementation barely registers any load, whereas once it enters operation your dashboards start to look very funny and unpredictable, and your scaling needs become far greater just to handle those spikes gracefully. This is a radical departure from classic request-and-response patterns where load is more predictable. | | |
| ▲ | nasretdinov 2 days ago | parent [-] | | With long polling you don't have the application logic handle the waiting part — that would be too expensive. You typically have a separate service that holds the open connections until notified, which then calls the actual backend
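A sketch of that separation using Python asyncio (the `Notifier` class is a hypothetical name for illustration; real deployments often use a dedicated push gateway in front of the app servers): idle waiters cost only a coroutine object each, and the application backend touches the notifier only when there is actually something to deliver.

```python
import asyncio

class Notifier:
    """Holds many idle long-poll waiters cheaply; the application backend
    just calls notify() when news arrives. A pattern sketch, not a real
    framework API."""
    def __init__(self) -> None:
        self._event = asyncio.Event()
        self._payload = None

    async def wait(self, timeout: float = 30.0):
        """Park a connection until notified; on timeout the client
        simply reconnects, which bounds how long a stale waiter lives."""
        try:
            await asyncio.wait_for(self._event.wait(), timeout)
            return self._payload
        except asyncio.TimeoutError:
            return None

    def notify(self, payload) -> None:
        self._payload = payload
        self._event.set()

async def main() -> list:
    n = Notifier()
    # 10,000 idle waiters are just 10,000 coroutines, not 10,000 threads.
    waiters = [asyncio.create_task(n.wait()) for _ in range(10_000)]
    await asyncio.sleep(0)        # let the waiters start parking
    n.notify("turn 42 complete")  # the backend's single call fans out
    return await asyncio.gather(*waiters)

results = asyncio.run(main())
```

This also illustrates motorest's spike concern: the waiting side is nearly free, but the single `notify()` fans out to every parked connection at once, which is exactly where CPU and network load bunch up.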
|
|
|
|
| ▲ | 38 3 days ago | parent | prev [-] |
| [dead] |