```text
$ node -v
v14.17.0 (Warning: Node.js version is end-of-life. Security patches are non-existent.)
$ npm audit
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Scanning 2,482 dependencies for vulnerabilities…
Critical: 14
High: 89
Moderate: 156
Low: 412
Run npm audit fix to do absolutely nothing because the dependency tree is a
circular nightmare of peer-dependency conflicts.
$ vite build
vite v2.9.15 building for production…
transforming (1402) index.html
✓ 4201 modules transformed.
dist/assets/index.d7a8f1.js   18.42 MB │ gzip: 5.12 MB
dist/assets/vendor.a1b2c3.js  24.11 MB │ gzip: 7.80 MB
(!) Some chunks are larger than 500 KiB after minification.
Consider using dynamic import() to code-split the application.
Or, you know, stop importing the entire AWS SDK into a login form.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## The `node_modules` Obesity Crisis and the “Rockstar” Legacy
I stepped into this "react app" three hours ago. My coffee is cold, my terminal is bleeding red, and I’ve already found three different versions of `lodash` installed. One is a direct dependency, one is a sub-dependency of a defunct charting library, and the third is there because the previous "rockstar" developer didn't know how to use `Array.prototype.map()`.
This isn't a codebase. It's a digital landfill.
The `package.json` is a crime scene. We have `moment.js` sitting next to `dayjs` sitting next to `luxon`. Why? Because "Rockstar Rick" liked the API of one for formatting and the other for timezone math, and he couldn't be bothered to read the documentation for either. The result? A 500KB tax on every single user's data plan just so Rick could avoid a `console.log` session.
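For most of what those three date libraries are doing here, the platform already ships a formatter. A minimal sketch using the built-in `Intl.DateTimeFormat` API (the locale and options are illustrative, not what this codebase necessarily needs):

```typescript
// Format a date without shipping a date library.
// Intl is built into every modern browser and Node.js.
const formatDate = (iso: string): string =>
  new Intl.DateTimeFormat("en-US", {
    timeZone: "UTC", // pin the zone so the output is deterministic
    year: "numeric",
    month: "short",
    day: "numeric",
  }).format(new Date(iso));

console.log(formatDate("2024-01-15T00:00:00Z")); // "Jan 15, 2024"
```

Timezone math is the one place a library (a single one) may still earn its bytes; formatting almost never is.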
The "react app" currently takes 45 seconds to start in a local development environment. 45 seconds. In that time, I could have reconsidered my career choices, which I am currently doing. The HMR (Hot Module Replacement) is broken because the dependency graph is so tangled that Vite just gives up and reloads the entire page every time you change a single semicolon in a CSS module.
**The Hard Truth Fix:**
We are nuking the `node_modules`. We are moving to Node 20.x LTS immediately. We are migrating to Vite 5.4. We are implementing `pnpm` to enforce strict dependency resolution and stop the phantom dependency leaks. If a library hasn't been updated since 2021, it's gone. If it's a utility library that can be replaced by 10 lines of native TypeScript, it's gone.
```bash
# PRO-TIP FROM THE TRENCHES
# Run this to see the horror you've inherited:
#   npx depcheck
# It won't catch everything, but it'll show you the 40% of
# your dependency list that isn't even being imported.
```
## useEffect Is Not a Lifecycle Method, You Cowards
I opened `App.tsx` and found a `useEffect` hook that was 400 lines long. It had a dependency array containing seven different objects, three of which were initialized as new literals on every render.
This “react app” is a perpetual motion machine of wasted CPU cycles.
The previous developer treated `useEffect` like a dumping ground for “stuff that should happen eventually.” They used it to sync state, and to trigger API calls that trigger other state changes that trigger other effects, creating a cascading waterfall of re-renders that would make a waterfall chart in Chrome DevTools look like a vertical line.
The “rockstar” didn’t understand that `useEffect` is for synchronization with external systems, not for managing the internal logic of the “react app”. They used it to derive state. You don’t need an effect to filter a list based on a search term. You do that during the render phase. You memoize it if it’s expensive. You don’t set state, trigger a re-render, run an effect, set state again, and trigger a second re-render.
The browser’s main thread is screaming. On a low-end Android device—the kind our actual customers use, not the $3,000 MacBooks the “rockstar” used—this “react app” is completely unresponsive for the first six seconds after “Load.”
**The Hard Truth Fix:**
We are stripping out 70% of the `useEffect` calls. We are moving to `useMemo` for derived data. We are moving data fetching to a library that actually understands cache invalidation (TanStack Query), rather than a home-grown `useEffect` + `fetch` + `useState` mess that doesn’t handle race conditions, let alone error states or loading skeletons.
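The “derive, don’t sync” point is easiest to see with the logic pulled out of the component: filtering is a pure computation over existing state, not something that belongs in an effect. A hypothetical sketch (the `User` shape is made up for illustration):

```typescript
interface User {
  id: number;
  name: string;
}

// Derived data: computed from existing state during render.
// In a component you would wrap this in
//   useMemo(() => filterUsers(users, query), [users, query])
// instead of mirroring the result into a second useState via an effect.
function filterUsers(users: User[], query: string): User[] {
  const q = query.trim().toLowerCase();
  return q === "" ? users : users.filter((u) => u.name.toLowerCase().includes(q));
}

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
  { id: 3, name: "Adam" },
];

console.log(filterUsers(users, "ad").map((u) => u.name)); // ["Ada", "Adam"]
```

One render pass, no effect, no second state variable that can drift out of sync.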
## Context Providers: The Global Re-render Engine
The “rockstar” hated Redux. Fine. I hate Redux too. But their solution was worse: a single, monolithic `AppProvider` that wraps the entire “react app”.
Inside this provider sits everything. User authentication, theme settings, a massive “global” data object, the state of every sidebar and modal, and—for some reason—the current scroll position of the window.
Because this is one giant object, every time the user scrolls or a single notification pops up, the entire “react app” re-renders. Every button, every list item, every heavy SVG icon. React’s Virtual DOM is fast, but it’s not “re-calculate 5,000 nodes 60 times a second” fast.
The Virtual DOM is still JavaScript. JavaScript has to be parsed. It has to be executed. The diffing algorithm has to walk the tree. When you have a massive tree and you’re invalidating the root on every frame, you’re not using React; you’re using a very expensive way to burn battery life.
**The Hard Truth Fix:**
Atomic state. We are breaking the `AppProvider` into small, focused contexts. Better yet, we’re moving the high-frequency state to something like Zustand or Jotai, where we can subscribe to specific slices of state without triggering a full-tree reconciliation. If a component doesn’t need to know about the user’s avatar URL, it shouldn’t re-render when that URL changes.
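Slice subscription is the whole trick behind stores like Zustand and Jotai. Here is a toy external store in plain TypeScript to show the mechanism; this is a sketch of the idea, not the real Zustand API:

```typescript
type Listener = () => void;

// A toy external store: subscribers pick a *slice* with a selector,
// and are only notified when that slice actually changes identity.
function createStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener>();

  return {
    getState: () => state,
    setState(partial: Partial<S>) {
      state = { ...state, ...partial };
      listeners.forEach((l) => l());
    },
    subscribe<T>(selector: (s: S) => T, onChange: (slice: T) => void) {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        if (!Object.is(next, prev)) {
          prev = next;
          onChange(next); // only fires when the selected slice changed
        }
      };
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

const store = createStore({ avatarUrl: "a.png", scrollY: 0 });
let avatarRenders = 0;
store.subscribe((s) => s.avatarUrl, () => avatarRenders++);

store.setState({ scrollY: 100 });        // scroll changes: avatar subscriber stays quiet
store.setState({ avatarUrl: "b.png" });  // avatar changes: subscriber fires once
console.log(avatarRenders); // 1
```

Contrast with a monolithic context value: every consumer re-renders on every `setState`, selector or not.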
```ts
// VITE CONFIG RECONSTRUCTION (vite.config.ts)
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc'; // Use SWC, it's faster. Period.

export default defineConfig({
  plugins: [react()],
  build: {
    target: 'esnext',
    minify: 'terser',
    terserOptions: {
      compress: {
        drop_console: true, // Stop leaking your "HERE" logs to production
        drop_debugger: true,
      },
    },
    rollupOptions: {
      output: {
        manualChunks: {
          'vendor-react': ['react', 'react-dom'],
          'vendor-utils': ['lodash-es', 'date-fns'], // Use tree-shakeable versions
        },
      },
    },
  },
});
```
## The Virtual DOM Is Not a Magic Performance Wand
There is a pervasive myth among “rockstar” frontend developers that the Virtual DOM makes the “react app” fast by default. This is a lie. The fastest DOM update is the one you don’t do.
In this “react app”, I found a table component rendering 2,000 rows. Each row has five interactive elements. No virtualization. No `React.memo`. Just 2,000 raw components being thrown at the browser. When the user types in the search bar to filter that table, the input lag is nearly 200ms.
The “rockstar” thought, “React will handle it.” No, Rick. React will spend 150ms figuring out that 1,995 of those rows didn’t change, and then the browser will spend another 50ms doing the actual paint. To the user, it feels like wading through molasses.
We also have to talk about the “Main Thread” cost. On a mobile device, the cost of executing the JavaScript for this “react app” is astronomical. We are shipping 5MB of JS. The browser has to download it (slow), decompress it (CPU intensive), parse it (blocking), and execute it (blocking). By the time the “react app” is interactive, the user has already closed the tab and gone to a competitor’s site that was built with basic HTML and a prayer.
**The Hard Truth Fix:**
Windowing. We use `react-window` or `virtuoso` for any list longer than 50 items. We implement `React.memo` on expensive leaf components, but only after profiling. We stop treating the Virtual DOM like a garbage disposal and start treating it like a precious resource.
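Windowing is arithmetic, not magic. The core of what a library like `react-window` computes is simply which rows intersect the viewport; a sketch of that index math (fixed row height assumed):

```typescript
// Given scroll position and a fixed row height, compute the slice of rows
// that should actually exist in the DOM. Everything else is never mounted.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3, // extra rows above/below to avoid flicker mid-scroll
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end };
}

// 2,000 rows, 40px each, 600px viewport, scrolled to row 250:
const range = visibleRange(250 * 40, 600, 40, 2000);
console.log(range); // { start: 247, end: 268 } — 21 rows in the DOM, not 2,000
```

The filter-while-typing case goes from diffing 2,000 components to diffing about twenty.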
```bash
# PRO-TIP FROM THE TRENCHES
# Use the "Performance" tab in Chrome.
# Look for "Long Tasks" (anything over 50ms).
# If your "react app" has a 500ms task on load,
# you're not an engineer; you're a saboteur.
```
## CSS-in-JS: Runtime Overhead for Aesthetic Laziness
The “rockstar” decided that standard CSS was too hard. Instead, they used a CSS-in-JS library that generates styles at runtime.
Every time a component in this “react app” re-renders, the library has to:
1. Hash the new props.
2. Check if a style for those props already exists.
3. If not, generate a new CSS string.
4. Inject that string into a `<style>` tag in the `<head>`.
5. Trigger the browser to recalculate styles for the entire page.
This is happening on every single hover state. On every single input change. We are literally re-parsing CSS because someone didn’t want to learn how to use CSS Modules or Tailwind.
The bundle size is also inflated by the runtime of the CSS-in-JS library itself. That’s another 15-30KB of JS that does nothing but do what the browser is already built to do: parse CSS.
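To make the cost concrete, here is a stripped-down sketch of the per-render work described in the numbered steps above. This is a toy engine, not any real library’s internals; real ones do considerably more, which is the point:

```typescript
// A toy runtime CSS-in-JS engine: hash the style object, check the cache,
// and "inject" on a miss. Real libraries do this work during render.
const injected: string[] = [];              // stands in for <style> tags in <head>
const cache = new Map<string, string>();
let hashCalls = 0;

function css(styles: Record<string, string>): string {
  hashCalls++;                              // step 1: hashing runs on EVERY call, hit or miss
  const key = JSON.stringify(styles);
  const hit = cache.get(key);               // step 2: cache lookup
  if (hit) return hit;
  const className = `css-${cache.size}`;    // step 3: generate a new class
  const body = Object.entries(styles).map(([k, v]) => `${k}:${v}`).join(";");
  injected.push(`.${className}{${body}}`);  // step 4: inject into the document
  cache.set(key, className);
  return className;
}

// Two renders with identical props: one injection, but two rounds of hashing.
css({ color: "red", padding: "4px" });
css({ color: "red", padding: "4px" });
console.log(injected.length, hashCalls); // 1 2
```

Steps 1 and 2 run on every render of every styled component, forever. A compiled `.css` file does that work exactly once, at build time.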
**The Hard Truth Fix:**
We are moving to Tailwind CSS or CSS Modules. Zero runtime overhead. The styles are compiled at build time. The browser gets a `.css` file that it can cache and parse in parallel with the JS. No more style-injection lag. No more Flash of Unstyled Content because the JS hasn’t finished loading yet.
## The “Any” Script: TypeScript as a Suggestion
The “rockstar” bragged about using TypeScript. I looked at the code.
```ts
const data: any = await fetch(...)
const handleClick = (e: any) => { ... }
interface GlobalState { [key: string]: any }
```
This isn’t TypeScript. This is JavaScript with extra steps and more annoying build errors. The “react app” is riddled with “Cannot read property ‘id’ of undefined” errors in production because the type annotations were lies.
The “rockstar” used `any` whenever they hit a slightly complex generic or a nested API response. They bypassed the very tool meant to prevent the bugs we are now spending 40 hours a week fixing. The “react app” has no type safety, but it has all the overhead of a TypeScript build pipeline. It’s the worst of both worlds.
**The Hard Truth Fix:**
`tsconfig.json` update: `"strict": true`.
We are banning `any`. We are using `unknown` and type guards. We are generating types from our API schema using `openapi-typescript`. If the build fails because of a type error, good. That’s the build system doing its job. Better a failed build than a 2:00 AM page because a `null` value snuck into a component.
```jsonc
// tsconfig.json - The "No More Games" Edition
{
  "compilerOptions": {
    "target": "ESNext",
    "lib": ["DOM", "DOM.Iterable", "ESNext"],
    "module": "ESNext",
    "skipLibCheck": true,
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "react-jsx",
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noFallthroughCasesInSwitch": true,
    "noImplicitAny": true,
    "useUnknownInCatchVariables": true
  }
}
```
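With strict mode on, the `any`-typed fetch result stops compiling. The replacement pattern is `unknown` plus a type guard; a sketch, where the `ApiUser` shape is hypothetical and stands in for whatever the schema-generated types will provide:

```typescript
interface ApiUser {
  id: number;
  name: string;
}

// A type guard: the compiler only lets you touch .id after this check passes.
function isApiUser(value: unknown): value is ApiUser {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as ApiUser).id === "number" &&
    typeof (value as ApiUser).name === "string"
  );
}

// Simulated response bodies — in real code this is `await res.json()`,
// which should be typed unknown, never any.
const good: unknown = { id: 7, name: "Ada" };
const bad: unknown = { id: "7" }; // the kind of payload behind the 2 AM pages

console.log(isApiUser(good)); // true
console.log(isApiUser(bad));  // false — caught at the boundary, not in a component
```

The guard runs once at the API boundary; everything downstream gets a real type instead of a promise that someone, somewhere, checked.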
## The Reconstruction Plan: From Legacy Mess to SRE-Approved
The “rockstar” is gone. They’re probably at another startup right now, “disrupting” a new codebase with 14 layers of abstraction and a custom state management library they wrote over a weekend.
We are left to clean up the mess.
The reconstruction of this “react app” will not be “seamless.” It will be painful. We are going to rip out the guts of the routing (moving from a custom-built disaster to TanStack Router), we are going to standardize the UI components, and we are going to implement a rigorous CI/CD pipeline that fails the build if the bundle size increases by more than 5% without a manual override.
We are moving to a “Performance First” architecture.
- Route-based Code Splitting: No user should download the “Admin Dashboard” code when they are on the “Login” page.
- Image Optimization: We found 4MB PNGs being used as icons. We are moving to SVGs and WebP with proper `srcset` attributes.
- Tree-shaking Audit: We are removing barrel files (`index.ts` files that export everything). They are the silent killers of tree-shaking. When you import one function from a barrel file, the bundler often pulls in the entire directory.
- Hydration Strategy: This “react app” is currently client-side only. We will evaluate whether Server-Side Rendering (SSR) via a framework like Remix or Next.js is necessary, but only after we’ve fixed the fundamental client-side bloat. SSR is not a band-aid for bad code.
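Route-based splitting boils down to one mechanism: hold a loader function per route and only await it on navigation. A framework-free sketch of that mechanism; in the real app this is `React.lazy` plus dynamic `import()` of route files, and the inline async loaders below are stand-ins for separate chunks:

```typescript
// Each route maps to an async loader. A bundler turns dynamic import()
// into a separate chunk; here loaders are inlined to keep the sketch runnable.
type RouteModule = { render: () => string };

const routes: Record<string, () => Promise<RouteModule>> = {
  "/login": async () => ({ render: () => "login form" }),
  "/admin": async () => ({ render: () => "admin dashboard" }), // never fetched for login users
};

const loaded = new Map<string, RouteModule>();

async function navigate(path: string): Promise<string> {
  let mod = loaded.get(path);
  if (!mod) {
    mod = await routes[path](); // the chunk is fetched on first visit only
    loaded.set(path, mod);
  }
  return mod.render();
}

navigate("/login").then((html) => {
  console.log(html, "| chunks loaded:", loaded.size);
});
```

A user who never opens `/admin` never pays for it: zero bytes downloaded, zero bytes parsed.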
The goal isn’t to make the code “pretty.” I don’t care if the code is pretty. I care if it’s predictable. I care if it’s observable. I care if it doesn’t trigger an OOM (Out of Memory) error on a three-year-old phone.
The “rockstar” era is over. The SRE era has begun. We don’t build “vibrant tapestries” of code. We build stable, performant systems that work.
Now, if you’ll excuse me, I have 1,400 `eslint-disable` comments to delete.
```bash
# FINAL PRO-TIP
# If you see a file named 'utils.ts' that is over 2000 lines long,
# delete it. Start over. It's not a utility file;
# it's a junk drawer where logic goes to die.
```
The “react app” will be fixed. Not because it’s fun, but because it’s necessary. Every byte we shave off the bundle is a second of a user’s life we’re giving back. Every re-render we eliminate is a bit of battery life we’re saving. This isn’t just “frontend development.” This is systems engineering. And it’s time we started acting like it.
We are currently at 1.2MB for the main bundle after the first pass of dependency pruning. It’s still too high. The “rockstar” left a dependency called `react-icons` that includes every icon set known to man. We’re replacing it with specific imports. That’s another 200KB gone.
The reconstruction continues. Until the “react app” is lean, mean, and actually functional, I’m not leaving this terminal.
```bash
$ git commit -m "chore: remove rockstar ego; fix bundle size; delete 400 unused dependencies"
$ git push origin main --force  # Yes, I'm forcing it. I'm the only one left who knows how this works.
```
The audit is complete. The plan is in motion. Don’t talk to me about “modern web” features until you can explain the difference between a `Map` and an `Object` in terms of memory allocation.
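Since the bar has been set: a short sketch of the observable differences. The memory-allocation side (engines optimize plain objects with hidden classes, while `Map` is tuned for frequent insertion and deletion of arbitrary keys) doesn’t show up in code, but the API differences below do:

```typescript
// Object keys are silently coerced to strings; Map keys keep their type.
const obj = { 1: "one" }; // the numeric key becomes the string "1"
console.log(Object.keys(obj)); // ["1"]

const map = new Map<number | string, string>();
map.set(1, "one");
map.set("1", "ONE"); // distinct key: no coercion
console.log(map.size); // 2 — number 1 and string "1" coexist

// Map tracks its size in O(1) and carries no inherited keys;
// a plain object drags Object.prototype along with it.
console.log(map.has(1), "toString" in obj); // true true
```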
End of Log.
Post-Scriptum for the Management:
If I see one more “rockstar” hire who doesn’t know how to use the “Network” tab in DevTools, I’m quitting and becoming a carpenter. At least in carpentry, the wood doesn’t update its API every six months and break the stairs.
Total Bundle Size Target: < 250KB (Gzipped)
Time to Interactive Target: < 2.0s on 3G
Status: In Progress. It hurts, but it’s working.
Note: No “vibrant tapestries” were harmed in the making of this reconstruction plan. Several “seamless” abstractions were, however, brutally murdered.
```bash
# One last check...
$ du -sh node_modules
1.2G
# Still too much. The pruning continues.
```
The “react app” is a reflection of the engineering culture that produced it. If you reward “speed” over “stability,” you get this. If you reward “cleverness” over “clarity,” you get this. We are changing the culture, one `pnpm uninstall` at a time.
The Virtual DOM diffing is currently running 40% faster after we removed the anonymous functions from the render props. The “rockstar” thought they were being “functional.” They were just being lazy. Every `() => doSomething()` in a JSX prop is a new reference on every render. A new reference defeats `React.memo` on the child. A defeated memo is a wasted re-render.
We are now using `useCallback`. We are using stable references. The “react app” is starting to breathe. The CPU fan on my laptop has finally stopped spinning at 6000 RPM.
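The reference problem is observable in plain JavaScript: every evaluation of an arrow expression allocates a fresh function object. A sketch of why that defeats memoization, with a tiny stand-in for the caching that `useCallback` performs (this is not React’s implementation):

```typescript
// Two evaluations of "the same" arrow are different objects:
const make = () => () => "click";
console.log(make() === make()); // false — a new reference every render

// What useCallback buys you: return the cached function while deps are unchanged.
function stableCallback<T extends () => unknown>(
  create: () => T,
  deps: unknown[],
  cache: { deps?: unknown[]; fn?: T },
): T {
  const sameDeps =
    cache.deps !== undefined &&
    cache.deps.length === deps.length &&
    cache.deps.every((d, i) => Object.is(d, deps[i]));
  if (!sameDeps) {
    cache.deps = deps;
    cache.fn = create(); // deps changed: allocate a new callback
  }
  return cache.fn as T;
}

const cache: { deps?: unknown[]; fn?: () => string } = {};
const a = stableCallback(() => () => "save", [1], cache);
const b = stableCallback(() => () => "save", [1], cache); // same deps: cached reference
console.log(a === b); // true — memoized children see a stable prop and skip work
```

Stable reference in, stable prop out, `React.memo` finally gets to do its job.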
This is the work. It’s not glamorous. It’s not “vibrant.” It’s just engineering. And it’s about time someone did it.
Reconstruction Plan Phase 1: Complete.
Phase 2: Total Dependency Annihilation: Commencing.
```bash
# SRE final thought:
# "Modern web development" is just the art of
# adding layers of complexity until no one
# understands why the 'Submit' button takes
# three seconds to click.
```
The “react app” will survive. But the “rockstar’s” code won’t. And that’s exactly how it should be.
2,145 words. The pain is documented. The plan is clear. Back to the terminal. There’s a moment-timezone instance hiding in a utility folder, and I’m going to find it. I’m going to find it and I’m going to kill it.
For the users. For the main thread. For sanity.
EOF