sfn42 4 days ago
This works for some websites, not all. And often it results in significant loading times, because you're loading tons of unnecessary data. In many cases it is much better to send requests more frequently, similar to how a server-side rendered website would behave: instead of fetching 10,000 products, just fetch a page of products. Much faster. Have endpoints for filters and searches, also paginated. This can be fast if you do it right. Since you're only sending minimal amounts of data, everything is fast. The server can handle lots of requests and clients don't need a powerful connection to have a good experience.

Network requests themselves are not slow. Even a computer with a bad connection and a long distance to the server will probably have less than a 500ms round trip; more like 50 to a few hundred at most. Anything beyond that is not the client, it's the server. If you make the backend slow then it'll be slow. If you make the request really large then the bad connection will struggle.

It's also worth mentioning that I would much rather deliver a good service to most users than make everyone's experience worse just for the sake of letting someone load the page and continue using it offline. Most websites don't make much sense to use offline anyway. You need to send some requests; the best approach is simply to make them fast and small.
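As a rough illustration of the page-at-a-time approach, here is a minimal TypeScript sketch. The /api/products endpoint, its query parameters, and the Product shape are made-up assumptions, not any particular API:

    // Page-at-a-time fetching. The endpoint, its query parameters, and the
    // Product shape are hypothetical.
    interface Product { id: string; name: string; price: number; }
    interface Page<T> { items: T[]; page: number; totalPages: number; }

    async function fetchProducts(
      page: number,
      search = "",
      pageSize = 20,
    ): Promise<Page<Product>> {
      const params = new URLSearchParams({
        page: String(page),
        pageSize: String(pageSize),
        search, // filtering happens server-side, so the payload stays small
      });
      const res = await fetch(`/api/products?${params}`);
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      return res.json(); // one small page instead of 10,000 products
    }

A product list, a search box, or a filter panel would all call this with different parameters, so every interaction stays a single small request.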
Izkata 4 days ago | parent
> In many cases it is much better to send requests more frequently, similarly to how a server side rendered website would behave. Instead of fetching 10,000 products, just fetch a page of products. Much faster. Have endpoints for filters and searches, also paginated. This can be fast if you do it right. Since you're only sending minimal amounts of data everything is fast.

This is actually a great example of what I mentioned elsewhere about how people seem to have forgotten how to make a SPA responsive. These are both simpler implementations, but not really the best choice for user interaction.

A better solution is to take the paginated version and pre-cache what the user might do next: when the results are loaded, return pages 1 and 2, display page 1, and cache page 2 so it can be displayed immediately if clicked. If the user does click through, display it immediately and silently request and cache page 3 in the background, and so on. This keeps the SPA responsive with few to no loading spinners on the happy path, and because it's happening in the background you can automatically retry on a flaky connection without bothering the user.

This is how Gmail and Google Maps blew people's minds when they were first released: by moving data to the frontend and pushing requests to the server into the background, the user could keep working without interruption while updates happened behind the scenes.
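A minimal sketch of the prefetch-and-cache pattern described above, reusing the hypothetical fetchProducts and Page names from the parent comment's sketch; here the next page is fetched as a separate background request rather than bundled into the first response, and the retry/backoff numbers are arbitrary:

    // Prefetch-and-cache sketch. Product, Page, and fetchProducts are the
    // hypothetical names from the parent comment's sketch.
    interface Product { id: string; name: string; price: number; }
    interface Page<T> { items: T[]; page: number; totalPages: number; }
    declare function fetchProducts(page: number): Promise<Page<Product>>;

    const pageCache = new Map<number, Page<Product>>();

    // Silent retries with backoff, so a flaky connection doesn't surface as
    // an error for a page the user hasn't even asked for yet.
    async function fetchWithRetry(page: number, attempts = 3): Promise<Page<Product>> {
      for (let i = 0; ; i++) {
        try {
          return await fetchProducts(page);
        } catch (err) {
          if (i >= attempts - 1) throw err;
          await new Promise(r => setTimeout(r, 500 * 2 ** i));
        }
      }
    }

    async function showPage(page: number, render: (p: Page<Product>) => void) {
      const cached = pageCache.get(page);
      if (cached) {
        render(cached); // happy path: no spinner at all
      } else {
        const loaded = await fetchWithRetry(page); // cache miss: one visible load
        pageCache.set(page, loaded);
        render(loaded);
      }
      // Warm the cache for the next page in the background; failures are
      // ignored because nothing visible depends on them yet.
      fetchWithRetry(page + 1)
        .then(next => pageCache.set(page + 1, next))
        .catch(() => {});
    }

Calling showPage(1, render) displays page 1 and quietly warms the cache for page 2; each later click renders from the cache and prefetches the next page, so a spinner only appears on a cache miss.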
| ||||||||
LegionMammal978 4 days ago | parent
> Network requests themselves are not slow. Even a computer with a bad connection and long distance will probably have less than 500ms round trip. More like 50 to a few hundred at most. Anything beyond that is not the client, it's the server.

Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips. Some combination of dropped packets and link-layer reconnections can make a roundtrip take 2 seconds at best, and upwards of 10 seconds at worst.

I don't at all disagree that too many tiny requests are the cause of many slow websites, nor that many SPAs have that issue. But it isn't a defining feature of the SPA model, and nothing's stopping you from thoughtfully batching the requests you do make.

What I mainly dislike is the idea of saving a bit of client effort at the cost of more roundtrips. E.g., one can write an SSR site where every form click takes a roundtrip for validation, and also rejects inputs until it gets a response. Many search forms in particular are guilty of this, and also run on an overloaded server. Bonus points if a few filter changes are enough to hit a 429.

That is to say, SSR makes sense for websites with little interaction, such as HN or old Reddit, which still run great on high-latency connections. But I get the sense it's being pushed into having the server respond to every minor redraw, which can easily drive up the number of roundtrips.

Personally, having learned web development only a few years ago, my impression is that roundtrips are nearly the costliest thing there is: a browser can do quite a lot in the span of 100,000 μs. Yet very few people seem to care about what's going over the wire. If done well, the SPA model seems to offer a great way to reduce this cost, but it's been tainted by the proliferation of overly massive JS blobs.

I guess the moral of the story is "people can write poorly-written websites in any rendering model, and there's no panacea except for careful optimization". Though I still don't get how JS blobs got so big in the first place.
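On the point about thoughtfully batching requests, one common strategy (not something this comment specifies) is to coalesce lookups that arrive within a few milliseconds into a single roundtrip. The sketch below assumes a hypothetical /api/items?ids=... endpoint that returns an array of items:

    // Coalesce individual lookups into one roundtrip. The /api/items
    // endpoint and Item shape are assumptions for illustration.
    interface Item { id: string; [key: string]: unknown; }
    type Waiter = { id: string; resolve: (i: Item) => void; reject: (e: unknown) => void };

    let pending: Waiter[] = [];
    let flushTimer: ReturnType<typeof setTimeout> | null = null;

    function getItem(id: string): Promise<Item> {
      return new Promise((resolve, reject) => {
        pending.push({ id, resolve, reject });
        // Wait a few ms so nearby callers share one request instead of N.
        if (flushTimer === null) flushTimer = setTimeout(flush, 10);
      });
    }

    async function flush() {
      flushTimer = null;
      const batch = pending;
      pending = [];
      try {
        const ids = [...new Set(batch.map(w => w.id))].join(",");
        const res = await fetch(`/api/items?ids=${encodeURIComponent(ids)}`);
        if (!res.ok) throw new Error(`Request failed: ${res.status}`);
        const items: Item[] = await res.json();
        const byId = new Map(items.map(i => [i.id, i] as const));
        for (const w of batch) {
          const item = byId.get(w.id);
          if (item) w.resolve(item);
          else w.reject(new Error(`No item ${w.id}`));
        }
      } catch (err) {
        for (const w of batch) w.reject(err);
      }
    }

Several components can call getItem independently during one render pass and still produce only a single request, which keeps the roundtrip count low even when the UI is fine-grained.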
|