LegionMammal978 | 4 days ago
> Network requests themselves are not slow. Even a computer with a bad connection and long distance will probably have less than 500ms round trip. More like 50 to a few hundred at most. Anything beyond that is not the client, it's the server.

Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips. Some combination of dropped packets and link-layer reconnections can make a roundtrip take 2 seconds at best, and upwards of 10 seconds at worst.

I don't at all disagree that too many tiny requests are the cause of many slow websites, nor that many SPAs have that issue. But it isn't a defining feature of the SPA model, and nothing's stopping you from thoughtfully batching the requests you do make.

What I mainly dislike is the idea of saving a bit of client effort at the cost of more roundtrips. E.g., one can write an SSR site where every form click takes a roundtrip for validation and rejects inputs until it gets a response. Many search forms in particular are guilty of this, and also run on an overloaded server. Bonus points if a few filter changes are enough to hit a 429.

That is to say, SSR makes sense for websites with little interaction, such as HN or old Reddit, which still run great on high-latency connections. But I get the sense it's being pushed into having the server respond to every minor redraw, which can easily drive up the number of roundtrips.

Personally, having learned web development only a few years ago, my impression is that roundtrips are nearly the costliest thing there is. A browser can do quite a lot in the span of 100,000 μs. Yet very few people seem to care about what's going over the wire. If done well, the SPA model seems to offer a great way to reduce this cost, but it's been tainted by the proliferation of overly massive JS blobs.

I guess the moral of the story is "people can build poorly performing websites in any rendering model, and there's no panacea except careful optimization." Though I still don't get how JS blobs got so big in the first place.
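(To make the "thoughtfully batching" point concrete, here is a minimal TypeScript sketch. The /api/batch endpoint, the per-item /api/items/:id endpoint, and the Item shape are hypothetical, purely to illustrate collapsing several tiny lookups into a single roundtrip.)

    // Assumed item shape and endpoints, for illustration only.
    type Item = { id: string; name: string };

    // Naive version: one fetch per id, each awaited in turn,
    // so N ids cost N stacked roundtrips.
    async function fetchItemsNaive(ids: string[]): Promise<Item[]> {
      const items: Item[] = [];
      for (const id of ids) {
        const res = await fetch(`/api/items/${id}`);
        items.push(await res.json());
      }
      return items;
    }

    // Batched version: collect the ids and resolve them with a single
    // POST to an assumed /api/batch endpoint - one roundtrip total,
    // however flaky the link happens to be.
    async function fetchItemsBatched(ids: string[]): Promise<Item[]> {
      const res = await fetch("/api/batch", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ids }),
      });
      if (!res.ok) throw new Error(`Batch request failed: ${res.status}`);
      return res.json();
    }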
sfn42 | 4 days ago
> Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips.

Right, but how often is a user both using my website and on a road trip with bad coverage? In the grand scheme of things, not very often. I also think this depends on what the round trip is for. Maybe the 10-second round trip happens simply because it's a rather large request.

> I don't at all disagree that too many tiny requests are the cause of many slow websites

That's not really what I was saying, though I don't disagree with it. If you're sending multiple small requests, there are two ways to go about it: you can send all of them at the same time, then wait for the responses and handle them as they come back, or you can send a request, wait for its response, then send the next, and so on. The latter causes slowness, because now you're stacking round trips on top of one another. The former can be completely fine.

But I'm not saying the client should be sending lots of requests. I'm saying it should get the data it needs rather than all the data it could possibly need. That can often be done in one request returning a few kilobytes; you can fit 64 KB in a single TCP packet, which is easily enough space to do useful stuff. For example, the front page of HN is 8 KB. It loads fast.

I'm also not saying you should use SSR. I do think SSR is a great way to build websites, but my previous comment was specifically about SPAs. You don't have to send a request for every little thing - you can validate forms on the frontend in both SPAs and SSR.

Round trips are costly, but not that costly. A lot of round trips are unavoidable; what I'm saying is that you shouldn't make them slower by sending too much data, and you should avoid stacking them serially.
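(A small TypeScript sketch of the two approaches described above, concurrent versus serial. The /api/user, /api/posts, and /api/notifications endpoints are placeholders standing in for any handful of small JSON requests.)

    // Option 1: fire all requests at once and handle the responses as
    // they come back. Total latency is roughly one roundtrip - the
    // slowest of the three - not the sum.
    async function loadDashboardParallel() {
      const [user, posts, notifications] = await Promise.all([
        fetch("/api/user").then(r => r.json()),
        fetch("/api/posts").then(r => r.json()),
        fetch("/api/notifications").then(r => r.json()),
      ]);
      return { user, posts, notifications };
    }

    // Option 2: send a request, wait for its response, then send the
    // next. Each await adds a full roundtrip, so total latency is the
    // sum of all three - this is the "stacking round trips" case.
    async function loadDashboardSerial() {
      const user = await fetch("/api/user").then(r => r.json());
      const posts = await fetch("/api/posts").then(r => r.json());
      const notifications = await fetch("/api/notifications").then(r => r.json());
      return { user, posts, notifications };
    }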