The RSC hype has cleared. Two years of production experience has replaced conference talks with actual patterns. Here's what I've learned shipping RSC applications across e-commerce, SaaS dashboards, content sites, and internal tooling.
The Mental Model Shift Is Real
The biggest mistake teams make with RSC is treating it as a performance optimisation layered onto existing component architecture. It's not. RSC requires a fundamentally different way of thinking about where data lives, where computation happens, and where interactivity belongs.
In a traditional React application, every component is a client component. You fetch data, manage state, and render HTML all in the browser. The server is just an API layer. In RSC, the default is inverted: components run on the server. The browser only receives what's explicitly marked as client code.
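A minimal sketch of the inversion in Next.js App Router terms (file paths, the `db` helper, and component names are illustrative, not from a real codebase):

```tsx
// app/products/page.tsx — a server component by default: it can be
// async, touch server-only resources, and ships no JavaScript.
import { AddToCartButton } from "./add-to-cart-button";

export default async function ProductsPage() {
  const products = await db.products.all(); // hypothetical server-side data access
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>
          {p.name}
          <AddToCartButton productId={p.id} />
        </li>
      ))}
    </ul>
  );
}
```

```tsx
// app/products/add-to-cart-button.tsx — the directive opts this file
// (and everything it imports) into the client bundle; only here do
// hooks and event handlers work.
"use client";

import { useState } from "react";

export function AddToCartButton({ productId }: { productId: string }) {
  const [added, setAdded] = useState(false);
  return (
    <button onClick={() => setAdded(true)}>
      {added ? "Added" : "Add to cart"}
    </button>
  );
}
```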
This inversion is powerful but disorienting. Teams that try to "add RSC" to an existing component library without redesigning their data layer end up with hybrid architectures that are harder to reason about than either pure client or pure server rendering.
Where RSC Actually Wins
The performance benefits of RSC are real, but they're not what you think. The common pitch is "zero client JavaScript for server components." True. But the more important benefit is co-location of data fetching with rendering.
In a client-side React app, a page typically triggers a cascade of requests: the browser loads HTML, loads the JavaScript bundle, mounts the component tree, fires off data fetches, waits for responses, re-renders. This cascade is why first contentful paint and time to interactive are often seconds apart on data-heavy pages.
With RSC, data fetching happens on the server before any HTML reaches the browser. The component awaits its data, renders, and ships HTML. The browser paints immediately. On a fast server with a co-located database, this eliminates the fetch-render cascade entirely.
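One subtlety: a server component can still create its own waterfall if it awaits requests one after another. A sketch of starting them in parallel (`getOrder` and `getCustomer` are hypothetical data helpers):

```tsx
// Server component: kick off both requests before awaiting either,
// so the two round-trips overlap instead of queueing.
export default async function OrderPage({ params }: { params: { id: string } }) {
  const [order, customer] = await Promise.all([
    getOrder(params.id),    // hypothetical
    getCustomer(params.id), // hypothetical
  ]);
  return (
    <article>
      <h1>Order {order.number}</h1>
      <p>Placed by {customer.name}</p>
    </article>
  );
}
```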
I've measured 2-4x improvements in Largest Contentful Paint on product pages that moved data fetching into server components. The wins are most pronounced when the page is data-heavy and the user is on a slow connection.
The Component Boundary Problem
The hardest architectural decision in RSC is where to draw the client boundary. Every `use client` directive you add creates a client component subtree: anything that component imports and renders will also be a client component, unless it's passed in as children (or other props) from a server component.
The naive approach is to mark interactive components as `use client` and leave everything else as server components. This breaks down quickly when you have shared layout components that need both server data and client interactivity.
The pattern that works: separate data concerns from interaction concerns. Build server components that fetch data and render static structure. Keep client components small and focused on specific interactions. Where a client component needs server-rendered content inside it, pass that content through its `children` prop: children supplied by a server component stay server-rendered even inside a client subtree.
I build what I call "data shells" in server components—components that fetch and structure data but render their interactive pieces via props or children. The interactive pieces are client components that receive their initial data as props rather than fetching it themselves.
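A sketch of the data-shell shape (component names and the `getProducts` helper are illustrative):

```tsx
// products-shell.tsx — server component ("data shell"): fetches and
// structures the data, then hands it to the interactive piece as props.
import { FilterableList } from "./filterable-list";

export default async function ProductsShell() {
  const products = await getProducts(); // hypothetical server-side fetch
  return (
    <section>
      <h2>Products</h2>
      {/* The client component receives its initial data; it never fetches. */}
      <FilterableList initialItems={products} />
    </section>
  );
}
```

```tsx
// filterable-list.tsx — small client component focused on one interaction.
"use client";

import { useState } from "react";

type Item = { id: string; name: string };

export function FilterableList({ initialItems }: { initialItems: Item[] }) {
  const [query, setQuery] = useState("");
  const visible = initialItems.filter((i) =>
    i.name.toLowerCase().includes(query.toLowerCase())
  );
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {visible.map((i) => (
          <li key={i.id}>{i.name}</li>
        ))}
      </ul>
    </>
  );
}
```

The same shell can also accept server-rendered content through `children`, which keeps heavy markup out of the client bundle even when it sits visually inside an interactive wrapper.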
Caching Is Non-Obvious
Next.js's RSC caching semantics have changed significantly across versions, and the mental model for what is and isn't cached is genuinely complex. I've seen teams ship RSC applications that re-fetch the same data on every request because they didn't understand that `fetch()` in server components is only cached automatically when called with specific options.
Current pattern that works reliably in Next.js: use `cache()` from React for request-level deduplication, and `'use cache'` directive at the component or function level for persistent caching with explicit tags. Don't rely on automatic fetch caching behaviour—it's been too inconsistent across versions.
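A sketch of both layers (the `db` helpers are hypothetical; note that `'use cache'` and `unstable_cacheTag` are experimental Next.js canary APIs at the time of writing, gated behind a config flag):

```tsx
import { cache } from "react";
import { unstable_cacheTag as cacheTag } from "next/cache";

// Request-level deduplication: every component that calls getUser(id)
// during a single render shares one in-flight promise and one result.
export const getUser = cache(async (id: string) => {
  return db.users.find(id); // hypothetical data access
});

// Persistent, tagged caching across requests via the experimental
// 'use cache' directive; invalidate later with revalidateTag("products").
export async function getProductList() {
  "use cache";
  cacheTag("products");
  return db.products.all(); // hypothetical
}
```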
Streaming Is Your Friend but Needs Explicit Design
Suspense boundaries and streaming work beautifully in RSC—when you design for them. The issue is that streaming requires you to think about what data is critical for first paint and what can be deferred.
Most teams initially wrap too little in Suspense, causing the entire page to block on the slowest data fetch. Others wrap too much, producing a page that flashes loading states aggressively.
The rule I use: anything above the fold that the user needs to understand the page should block. Everything else—related items, secondary stats, personalised recommendations—should stream in behind a Suspense boundary with a skeleton fallback.
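Applied to a product page, the split looks like this (data helpers and the skeleton component are illustrative):

```tsx
import { Suspense } from "react";

export default async function ProductPage({ params }: { params: { id: string } }) {
  // Critical: the user needs this to understand the page, so block on it.
  const product = await getProduct(params.id); // hypothetical
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.price}</p>
      {/* Deferred: the skeleton paints immediately, content streams in when ready. */}
      <Suspense fallback={<RecommendationsSkeleton />}>
        <Recommendations productId={params.id} />
      </Suspense>
    </main>
  );
}

async function Recommendations({ productId }: { productId: string }) {
  const items = await getRecommendations(productId); // slow, non-critical
  return (
    <ul>
      {items.map((i) => (
        <li key={i.id}>{i.name}</li>
      ))}
    </ul>
  );
}
```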
What I'd Tell a Team Starting Today
Start server-first. Design your component tree assuming everything is a server component, then add `use client` only where you genuinely need interactivity. This is easier to maintain than starting client-first and trying to move things to the server later.
Invest in your data layer before you invest in your component architecture. The value of RSC is proportional to the quality of your data access patterns. If your data fetching is already fast and efficient, RSC gives you a clean rendering model. If your data layer is a mess of N+1 queries and uncached API calls, RSC will make your performance worse because slow server fetches block rendering.
And test your Suspense boundaries deliberately. Don't assume they'll work—build a page that intentionally makes a slow request and verify the skeleton renders correctly and the content streams in as expected. The behaviour in development (which doesn't throttle) is meaningfully different from production on a cold serverless instance.
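One way to build that deliberately slow path (helper names are hypothetical): add an artificial delay inside the streamed component, then verify the skeleton-to-content transition against `next build && next start` rather than the dev server.

```tsx
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// A deliberately slow server component for exercising a Suspense
// boundary: the fallback should render immediately, and this content
// should stream in roughly three seconds later.
async function SlowRecommendations() {
  await sleep(3000); // simulate a cold cache or slow upstream
  const items = await getRecommendations(); // hypothetical
  return (
    <ul>
      {items.map((i) => (
        <li key={i.id}>{i.name}</li>
      ))}
    </ul>
  );
}
```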