Remix.run
nfw2 10 hours ago

I still don't get why RSC is better. This post takes things for granted that don't seem obvious to me. Why would I want heavy rendering tasks all done on my wimpy AWS box instead of on clients' MacBooks and iPhones?

Shipping moment.js for dates is a pain, sure, but that can be chunked and cached too. It's hard to imagine that the benefit of shaving X KB off the bundle could really be worth a round trip to the server whenever I need to format a date in the UI.

RSC seems like something only library maintainers like, although I appreciate TanStack not forcing it down my throat like Next.js does, I guess.

gherkinnn 10 hours ago | parent | next [-]

The article lists significant performance gains. Why render on wimpy phones over a bad network when a cheap AWS box can do it for you?

That aside, Next.js and the recent related vulnerabilities made me wary of RSC, and I struggle to see the benefit of RSCs over the previous server-side rendered and hydrated model. Chances are TanStack will do a better job than Vercel, and yet the bumpy ride of the last few years has tarnished the whole idea.

nfw2 10 hours ago | parent | next [-]

1. Rendered content, if there is enough of it, will be more data to send across the wire than a cached bundle.

2. Cached bundles are cached. The network doesn't matter when it's cached.

3. Even bottom-of-the-barrel Motorolas are not wimpy nowadays.

4. The obvious reason I don't want my AWS box to do rendering is that it will need to do everyone's rendering, and how big "everyone" is isn't constant. It's another moving part in a complex system that can break. Also because I have to pay for the box.

5. Fast networks are becoming more and more ubiquitous

6. The performance gains are for a static site, which won't necessarily be representative of a typical SaaS app. How do you measure the risk and cost of my site breaking because my date-rendering server got overloaded?

troupo 9 hours ago | parent | next [-]

> Even bottom of the barrel motorolas are not wimpy nowadays

They are: https://infrequently.org/2025/11/performance-inequality-gap-...

That said, RSCs and the rest of the "let's render a static site but also send a multi-megabyte bundle for 'hydration'" approach are still wrong.

nfw2 9 hours ago | parent [-]

I am going to base my opinion on using the bottom-of-the-barrel Motorola that I own rather than on reading that novel.

troupo 7 hours ago | parent [-]

"I'd rather base my opinion on my own personal anecdote than on stats." My "they are" was referring not to your specific Motorola but to the "bottom of the barrel". Which, while improving, still doesn't even remotely justify the bundle sizes or "fast networks".

--- start quote ---

The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April [2025]. The 75th percentile site is now larger than two copies of DOOM. P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade.

...

Compared with early 2024's estimates, we're seeing budget growth of 600+KiB for three seconds, and a full megabyte of extra headroom at five seconds

--- end quote ---

Translation: for P75 (i.e., for 75% of users) to get the site to load in three seconds, you can ship at most about 600 KiB of JavaScript.

nfw2 38 minutes ago | parent [-]

Ask any UX specialist: observing base reality (watching someone use X) gives a better impression of usability than any statistics will.

gherkinnn 5 hours ago | parent | prev [-]

Is serialising a model and building JSON that much more expensive than rendering HTML?

zarzavat 9 hours ago | parent | prev | next [-]

It's not 2010 anymore. Client compute is fast. Server compute is slow and expensive. 4G is ubiquitous and 3G is being phased out.

You can send a tiny amount of JS from a CDN and render on the client. You will save money because the server is efficiently serving JSON instead of doing a gazillion calls and string interpolation per request. The user won't notice.

Also, now that the server is responding with JSON it doesn't need to run any JS at all, so you can rewrite the server in an even more efficient language and save even more money.

troupo 7 hours ago | parent [-]

> It's not 2010 anymore. Client compute is fast.

It's not: https://infrequently.org/2025/11/performance-inequality-gap-...

zarzavat 2 hours ago | parent | next [-]

Yes it is. Even a cheap Xiaomi is fast enough to render any application you can think of.

React is not the bottleneck. The bottleneck is all the bloat in the application code.

Webdevs act as if optimization isn't a thing and the only solution to any performance issue is to add more hardware. This explains the popularity of server-side rendering: it's a way of solving a performance issue by "adding more hardware" to the user's phone.

Yet, the user's phone was always perfectly capable of doing what they needed it to do. The problem is their application code is an unoptimized turd. They could optimize it but that would be work. Fortunately, a helpful cloud computing service has the solution: just offload your unoptimized turd to servers in the cloud!

"Sounds fantastic," the webdevs said. "Anything to avoid opening devtools."

Except, that's two bad technical decisions. The first is spending money on compute that they don't even need. The second is that compute now has to be JavaScript. What could be a highly efficient Rust or Go API server blasting out JSON at light speed is now stuck running JS and React. Somewhere, someone at Vercel looks at their quarterly earnings and smiles.

coder97 5 hours ago | parent | prev [-]

Seems like using a low-tier Android gives you a nice reality check.

4 hours ago | parent | prev [-]
[deleted]
dminik 6 hours ago | parent | prev | next [-]

It's a really weird situation, but using public transport WiFi cured me of this thinking.

The number of times the initial HTML, CSS, and JS came through but the page then choked on fetching the content was insane. Staring at a spinner is more insulting than the page simply not loading.

That being said, I'm not a huge fan of RSCs either. Dumping the entire VDOM state into a script tag and then loading the full React runtime anyway seems like a waste of bandwidth.

_heimdall 3 hours ago | parent | prev | next [-]

Just because data can be rendered to DOM on the client doesn't mean it always should be.

I'll try to render HTML wherever the data is stored. Meaning, if the data lives in a hosted database I'll render on the server. If data is only stored on the client, I'll render there.

It's less about bundle size, in my opinion, and more about reduced complexity and data security.

That said, I've never been a fan of RSC and don't see it solving the "reduced complexity" goal.

danielhep 10 hours ago | parent | prev | next [-]

Without RSC you have to wait for the user to download the application bundle before the request for content can even be sent to the server. So the DB queries and such aren't even initiated until the client has the bundle and runs it, whereas with RSC all of that is kicked off the moment the first request comes in from the user.

nfw2 10 hours ago | parent | next [-]

That doesn't seem to be how this implementation of RSC is intended to work. Here, client code triggers the RSC fetch, which is treated as any other sort of data fetch. Presumably, it still waits for client code to load to do that.

Also SSR, even in React, existed well before RSCs did, and that seems to be really what you are talking about.

tannerlinsley 9 hours ago | parent | next [-]

Correct. People need to stop conflating SSR with RSC. Well said.

h14h 9 hours ago | parent | prev | next [-]

TanStack uses streams as the basis for loading RSC data, and recommends using a route loader to access them:

https://tanstack.com/start/latest/docs/framework/react/guide...

AFAIK, at least when using TanStack Router, this RSC implementation seems just as capable as the others when it comes to reducing server round trips.

danielhep 9 hours ago | parent | prev [-]

SSR is different and does not provide the same performance of RSCs. With SSR you get the advantage of an initially rendered page, but you don’t have access to data or state. So you are just rendering placeholders until it hydrates and the client can request the data.

RSCs allow you to render the initial page with the content loaded right away.

That said, I am not sure about Tanstack’s implementation. Need to spend more time reading about this.

Here’s a nice post explaining why RSCs do what SSR cannot: https://www.joshwcomeau.com/react/server-components/

nfw2 9 hours ago | parent [-]

You have it reversed. SSR in React without RSC gives you access to data and state on the client. That's what the hydration does. RSC strips it out to make the bundle smaller. There is no hydration.
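For what it's worth, the classic SSR-plus-hydration handoff being described can be sketched without React at all. This is a purely illustrative sketch (the names `renderPage`, `readEmbeddedProps`, and `window.__DATA__` are hypothetical, not any framework's real API): the server renders HTML and also embeds the serialized props, so the client-side bundle can re-render ("hydrate") with the same data. RSC's pitch is that it can skip shipping that duplicate payload plus the component code.

```typescript
// Hypothetical sketch of the classic SSR + hydration data handoff.
// All names here are illustrative, not a real framework API.

type Props = { user: string; items: string[] };

// "Server": render HTML and embed the props so the client can
// hydrate with the same data the server rendered from.
function renderPage(props: Props): string {
  const html = `<ul>${props.items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
  // Escape "<" so the JSON can't break out of the script tag.
  const payload = JSON.stringify(props).replace(/</g, "\\u003c");
  return (
    `<div id="root">${html}</div>` +
    `<script>window.__DATA__=${payload}</script>`
  );
}

// "Client": recover the embedded props before re-rendering.
function readEmbeddedProps(page: string): Props {
  const match = page.match(/window\.__DATA__=(.*?)<\/script>/);
  if (!match) throw new Error("no embedded props");
  return JSON.parse(match[1]);
}

const page = renderPage({ user: "ada", items: ["a", "b"] });
const hydrated = readEmbeddedProps(page);
// The client ends up holding the same props the server rendered with,
// which is why the data is sent twice: once as HTML, once as JSON.
```

The doubled payload in `page` (HTML plus JSON of the same data) is the cost RSC trades against bundle size.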

danielhep 8 hours ago | parent [-]

I mean the state from the client, like cookies and URL params. You can get access to that in SSR through the framework specific APIs like getServerSideProps in Next, but it’s not a great solution.

zarzavat 9 hours ago | parent | prev [-]

> Without RSC you have to wait for the user to download the application bundle before the request for content can even be sent to the server.

This is an argument for not putting all your JS in one monolithic bundle and instead parallelizing data loading and JS loading. It's not an argument for RSC.

danielhep 8 hours ago | parent [-]

Even if you split up the bundle you will still need multiple round trips to the server to fetch the data.

zarzavat 7 hours ago | parent [-]

Ignoring TLS we have:

1st RT: HTML and loader script (CDN)

2nd RT: data (app server) and code (CDN) in parallel

Therefore you need two. But not all roundtrips are equal. A roundtrip to a CDN is much faster than a roundtrip to an application server and database, unless you have some funky edge database setup.

If you render on the server, your first roundtrip is slow because the client has to wait for the application server and database before it can show anything at all. If you render on the client then your first roundtrip is fast but the second one is slow.
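The "2nd RT: data and code in parallel" step above amounts to firing both requests at once instead of serially. A minimal sketch, with the fetchers injected as stubs so the point is demonstrable without a real network or CDN (the names `loadInParallel` and `demo` are illustrative):

```typescript
// Fire the data fetch and the code fetch at the same time instead of
// serially; total wait becomes max() of the two, not their sum.

async function loadInParallel<D, C>(
  fetchData: () => Promise<D>, // e.g. () => fetch("/api/page-data")
  loadCode: () => Promise<C>,  // e.g. () => import("./app-bundle")
): Promise<[D, C]> {
  // Both promises are created (and thus started) before either is awaited.
  return Promise.all([fetchData(), loadCode()]);
}

// Tiny demo with stubbed "requests" that each resolve after 100ms.
const delay = <T,>(ms: number, value: T) =>
  new Promise<T>((resolve) => setTimeout(() => resolve(value), ms));

async function demo() {
  const start = Date.now();
  const [data, code] = await loadInParallel(
    () => delay(100, { title: "hello" }),
    () => delay(100, "bundle"),
  );
  // Elapsed is ~100ms rather than ~200ms, because the loads overlap.
  return { data, code, elapsed: Date.now() - start };
}
```

In a real page this is what an inline loader script in the first HTML response buys you: the data request doesn't have to wait for the bundle to arrive and execute.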

dbbk 4 hours ago | parent | prev | next [-]

Why should a low-powered Android phone be downloading and running a full Markdown parser or syntax highlighter? Stuff like that is obviously something that should be handled by the server and just returned as final HTML.
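A toy sketch of that idea (purely illustrative; `highlight` and `escapeHtml` are stand-ins, and a real Markdown parser or highlighter like highlight.js or shiki would be far larger, which is exactly the JS you avoid shipping): the server does the text-to-HTML work and the client receives only finished markup.

```typescript
// Server-side text-to-HTML rendering: the heavy parsing/highlighting
// dependency lives on the server, and the phone gets a plain string.

function escapeHtml(src: string): string {
  return src
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Toy "syntax highlighter" standing in for a real library. In an RSC
// (or plain SSR) setup, only its output string crosses the wire.
function highlight(src: string): string {
  const keywords = /\b(const|let|function|return)\b/g;
  return `<pre><code>${escapeHtml(src).replace(
    keywords,
    '<span class="kw">$1</span>',
  )}</code></pre>`;
}

// e.g. highlight("const x = a < b;") yields escaped, keyword-wrapped HTML
// that the client can drop into the DOM without running any parser.
```

The client-side cost is reduced to rendering a string; the parser, its grammar tables, and its dependencies never enter the bundle.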

presentation 5 hours ago | parent | prev | next [-]

One example: I have a fancy visualization in my app that is rendered on the server via RSC, and just some interactive tidbits get sent to the client. If I had packaged the whole visualization library it would have bloated my bundle size, but instead I ship barely any JS and still get a nice interactive vector data-viz experience. And the code just looks like normal React component nesting, more or less.

5 hours ago | parent | prev | next [-]
[deleted]
h14h 9 hours ago | parent | prev | next [-]

If your use cases don't benefit from RSC's performance characteristics, then RSCs probably aren't outright better for you.

But I do think they're a compelling primitive from a DX standpoint, since they offer more granularity in specifying the server/client boundary. The TanStack Composite/slots API is the real selling point, IMO, and as far as I can tell this API is largely (entirely?) thanks to RSCs.

5 hours ago | parent | prev | next [-]
[deleted]
ai_slop_hater 8 hours ago | parent | prev | next [-]

Because with RSC you don't have a shitload of loading indicators and layout shifts.

makeitrain 7 hours ago | parent | prev | next [-]

SEO is a good reason.

lo1tuma 9 hours ago | parent | prev [-]

[dead]