gherkinnn 10 hours ago

The article lists the significant performance gains. Why render on wimpy phones over a bad network when a cheap AWS box can do it for you?

That aside, Next.js and the recent related vulnerabilities made me wary of RSC, and I struggle to see the benefit of RSCs over the previous server-side rendered and hydrated model. Chances are TanStack will do a better job than Vercel, and yet the bumpy ride of the last few years has tarnished the whole idea.

nfw2 10 hours ago | parent | next [-]

1. Rendered content, if there is enough of it, will be more content to send across the wire than a cached bundle.

2. Cached bundles are cached. The network doesn't matter when it's cached.

3. Even bottom-of-the-barrel Motorolas are not wimpy nowadays.

4. The obvious reason I don't want my AWS box to do rendering is that it will need to do everyone's rendering, and how big "everyone" is isn't constant. It's another moving part in a complex system that can break. Also, I have to pay for the box.

5. Fast networks are becoming more and more ubiquitous

6. The performance gains are for a static site, which won't necessarily be representative of a typical SaaS. How do you measure the risk and cost of my site breaking because my date-rendering server got overloaded?
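Point 2 above can be made concrete. A minimal sketch, assuming a content-hashed bundle naming scheme and illustrative header values (not taken from any particular framework):

```javascript
// Content-hashed bundle names pair with an immutable Cache-Control header:
// once downloaded, the browser never re-fetches that asset, so network
// quality only matters on the first visit. A new build produces a new hash,
// hence a new URL, which is what makes "forever" caching safe.
const HASHED_BUNDLE = /\.[0-9a-f]{8}\.js$/;

function cacheHeaderFor(path) {
  return HASHED_BUNDLE.test(path)
    ? "public, max-age=31536000, immutable" // hashed assets: cache for a year
    : "no-cache"; // HTML must revalidate so it can point at new hashes
}

console.log(cacheHeaderFor("/static/app.3f9c1d2a.js"));
// public, max-age=31536000, immutable
```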

troupo 9 hours ago | parent | next [-]

> Even bottom of the barrel motorolas are not wimpy nowadays

They are: https://infrequently.org/2025/11/performance-inequality-gap-...

That said, RSCs and the rest of the "let's render a static site but let's also send a multi-megabyte bundle for 'hydration'" approach are still wrong.

nfw2 9 hours ago | parent [-]

I am going to base my opinion on using the bottom-of-the-barrel Motorola that I own rather than reading that novel.

troupo 7 hours ago | parent [-]

"I'd rather base my opinion on my own personal anecdote than on stats." My "they are" was referring not to your specific Motorola, but to the "bottom of the barrel". Which, while improving, still doesn't even remotely justify the bundle sizes or "fast networks".

--- start quote ---

The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April [2025]. The 75th percentile site is now larger than two copies of DOOM. P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade.

...

Compared with early 2024's estimates, we're seeing budget growth of 600+KiB for three seconds, and a full megabyte of extra headroom at five seconds

--- end quote ---

Translation: for P75 (i.e. for 75% of users) to get the site to load in three seconds, you can ship at most 600 KiB of JavaScript.
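The arithmetic behind that budget can be sketched roughly. The link numbers below (throughput, RTT) are illustrative assumptions for a P75 mobile connection, not figures from the article, and a real budget also has to cover parse and execute time on a slow CPU, which this transfer-only estimate ignores:

```javascript
// Back-of-the-envelope: seconds to transfer a bundle over a given link.
function transferSeconds(bytes, mbps, rttMs) {
  const bytesPerSecond = (mbps * 1e6) / 8; // megabits/s -> bytes/s
  return rttMs / 1000 + bytes / bytesPerSecond;
}

const budgetBytes = 600 * 1024; // the ~600 KiB budget from the quote
console.log(transferSeconds(budgetBytes, 7.5, 100).toFixed(2)); // "0.76"
// Transfer alone eats ~0.8 s of the 3 s target; the rest goes to DNS/TLS,
// HTML, parsing, and script execution on a slow device.
```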

nfw2 42 minutes ago | parent [-]

Ask any UX specialist: observing base reality (watching someone use X) gives a better impression of usability than any statistics will.

gherkinnn 5 hours ago | parent | prev [-]

Is serialising a model and building JSON that much more expensive than rendering HTML?
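For intuition, both paths walk the same model and concatenate strings; the HTML path adds markup (and, in real code, escaping) on top. A toy comparison with an illustrative three-item model:

```javascript
// The same model serialized both ways. The model shape is an illustrative
// assumption; which path is cheaper in practice depends on the templating
// engine, but the work involved is of the same kind.
const items = Array.from({ length: 3 }, (_, i) => ({ id: i, name: `item ${i}` }));

// Path 1: JSON API response.
const asJson = JSON.stringify(items);

// Path 2: server-rendered HTML for the same data (escaping elided here for
// brevity, but required in real code).
const asHtml = `<ul>${items
  .map((it) => `<li data-id="${it.id}">${it.name}</li>`)
  .join("")}</ul>`;

console.log(asJson);
console.log(asHtml);
```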

zarzavat 9 hours ago | parent | prev | next [-]

It's not 2010 anymore. Client compute is fast. Server compute is slow and expensive. 4G is ubiquitous and 3G is being phased out.

You can send a tiny amount of JS from a CDN and render on the client. You will save money because the server is efficiently serving JSON instead of doing a gazillion calls and string interpolation per request. The user won't notice.
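A minimal sketch of that flow, assuming a hypothetical `/api/users` endpoint and payload shape; the markup helper is kept pure so it can be exercised without a browser:

```javascript
// Pure helper: turn a JSON payload into markup (testable without a DOM).
function usersToHtml(users) {
  const esc = (s) =>
    String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return `<ul>${users.map((u) => `<li>${esc(u.name)}</li>`).join("")}</ul>`;
}

// In the browser: the CDN-served script fetches JSON and renders client-side.
// The endpoint is an assumption for illustration.
async function renderUsers(mount) {
  const users = await (await fetch("/api/users")).json();
  mount.innerHTML = usersToHtml(users);
}

console.log(usersToHtml([{ name: "Ada" }, { name: "Grace" }]));
// <ul><li>Ada</li><li>Grace</li></ul>
```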

Also, now that the server is responding with JSON it doesn't need to run any JS at all, so you can rewrite the server in an even more efficient language and save even more money.

troupo 7 hours ago | parent [-]

> It's not 2010 anymore. Client compute is fast.

It's not: https://infrequently.org/2025/11/performance-inequality-gap-...

zarzavat 2 hours ago | parent | next [-]

Yes it is. Even a cheap Xiaomi is fast enough to render any application you can think of.

React is not the bottleneck. The bottleneck is all the bloat in the application code.

Webdevs act as if optimization isn't a thing and the only solution to any performance issue is to add more hardware. This explains the popularity of server-side rendering: it's a way of solving a performance issue by "adding more hardware" to the user's phone.

Yet, the user's phone was always perfectly capable of doing what they needed it to do. The problem is their application code is an unoptimized turd. They could optimize it but that would be work. Fortunately, a helpful cloud computing service has the solution: just offload your unoptimized turd to servers in the cloud!

"Sounds fantastic," the webdevs said. "Anything to avoid opening devtools."

Except, that's two bad technical decisions. The first is spending money on compute that they don't even need. The second is that compute now has to be JavaScript. What could be a highly efficient Rust or Go API server blasting out JSON at light speed is now stuck running JS and React. Somewhere, someone at Vercel looks at their quarterly earnings and smiles.

coder97 5 hours ago | parent | prev [-]

Seems like using a low-tier Android gives you a nice reality check.
