JavaScript Best Practices: Why Your “Clean Code” is Crashing My Production Nodes
It was 3:14 AM on a Tuesday in 2019. I was on-call for a payment processing gateway. Our Node.js worker fleet started dropping like flies. The error? ERR_S3_UPLOAD_FAILED, followed immediately by the containers getting OOM-killed. I spent four hours digging through heap dumps only to realize a “senior” dev had implemented a “clean” logging middleware. It captured the entire request buffer in a closure to “ensure we had context” if an upload failed. Under high load, those closures stayed in the heap, the garbage collector (GC) panicked, and the event loop lagged so hard the health checks failed. We lost $40k in processed volume because someone wanted “pretty” logs.
That is the reality of JavaScript. It is a language that makes it incredibly easy to shoot yourself in the foot while feeling like you’re doing everything right. Most “javascript best” guides focus on where to put your curly braces or why you should use const over let. I don’t care about your linter settings. I care about why your Promise.all is DDOSing your own database and why your Map is leaking memory like a sieve. This isn’t a guide for beginners; it’s a survival manual for people running JS in production.
The Async/Await Trap: Parallelism vs. Concurrency
The biggest lie we tell juniors is that async/await makes asynchronous code “look synchronous.” It doesn’t. It just hides the complexity until it explodes in your face. Most developers use Promise.all() because they heard it’s faster. It is. It’s also a great way to exhaust your connection pool to api.stripe.com or your Postgres instance.
```javascript
// The "Clean Code" way that kills your DB
const processOrders = async (orders) => {
  await Promise.all(orders.map(async (order) => {
    const details = await db.query('SELECT * FROM orders WHERE id = $1', [order.id]);
    await shipOrder(details);
  }));
};
```
If orders has 1,000 items, you just fired 1,000 concurrent database queries. Your pg pool size is likely 10. You now have 990 promises sitting in the microtask queue, timing out, and consuming memory. In a real production environment, you need controlled concurrency. Use a library like p-limit or a simple batching pattern.
- Batching: Process items in chunks of 10 or 20.
- Timeouts: Never, ever use `fetch` or a DB driver without a `signal` from an `AbortController`.

Pro-tip: If you don’t set a timeout on your network requests, you are effectively giving your dependencies permission to hang your entire process indefinitely.
```javascript
// The SRE-approved way
import pLimit from 'p-limit';

const limit = pLimit(10); // Only 10 concurrent operations
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 5000);

try {
  const tasks = orders.map(order => limit(() =>
    fetch(`https://api.stripe.com/v1/charges/${order.id}`, { signal: controller.signal })
  ));
  const results = await Promise.all(tasks);
} finally {
  clearTimeout(timeoutId);
}
```
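If you’d rather not add a dependency, the batching pattern mentioned above works with a plain loop. A minimal sketch, with a fake async worker standing in for the real DB call:

```javascript
// Process items in fixed-size batches so at most `batchSize`
// operations are in flight at once. No external dependencies.
async function processInBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Only this batch runs concurrently; the next one waits.
    results.push(...await Promise.all(batch.map(worker)));
  }
  return results;
}

// Usage: simulate 25 "orders" with a fake async worker.
const orders = Array.from({ length: 25 }, (_, i) => ({ id: i }));
processInBatches(orders, 10, async (order) => order.id * 2)
  .then((out) => console.log(out.length)); // 25
```

The trade-off versus p-limit: a batch only finishes when its slowest item does, so you get slightly worse throughput in exchange for zero dependencies.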
Memory Leaks: Closures and the V8 Heap
JavaScript is garbage-collected, which gives developers a false sense of security. I’ve seen 32GB RAM nodes crash because of a 50-line script. The most common culprit? Closures that capture large objects in the outer scope. When you define a function inside another function, it keeps a reference to the variables in its parent scope. If that inner function is stored (e.g., in an event listener or a setInterval), the parent scope’s variables can’t be GC’d.
Consider this disaster I found in a production telemetry service:
```javascript
function monitorService() {
  const largeMetadata = new Array(1000000).fill('some-heavy-string');
  return function logStatus() {
    // This function doesn't even use largeMetadata!
    // But because it's in the same scope, largeMetadata is leaked.
    console.log("Service is up");
  };
}

const statusLogger = monitorService();
setInterval(statusLogger, 1000);
```
V8 is smart, but it’s not a psychic. In many cases, it will retain largeMetadata because statusLogger is still reachable. To fix this, you have to be explicit. Nullify references you no longer need. Use WeakMap or WeakSet if you need to associate data with objects without preventing their collection.
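The WeakMap approach looks like this. It’s a minimal sketch where `req` stands in for any short-lived object you want to annotate:

```javascript
// Associate per-request metadata without pinning the request
// object in memory: a WeakMap entry does not keep its key alive,
// so when `req` becomes unreachable, the entry (and its value)
// is eligible for collection along with it.
const metadata = new WeakMap();

function tagRequest(req, info) {
  metadata.set(req, info);
}

function getTag(req) {
  return metadata.get(req);
}

let req = { url: '/checkout' };
tagRequest(req, { startedAt: Date.now() });
console.log(getTag(req).startedAt > 0); // true

// Drop the only strong reference; the WeakMap no longer keeps
// the request or its metadata alive.
req = null;
```

Contrast this with a regular `Map`, which would hold a strong reference to every request it has ever seen until you remember to `delete` the entry.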
The Hidden Cost of Objects:
An empty object in V8 isn’t “free.” It carries overhead for its hidden class and properties. If you’re storing millions of small objects (like coordinates or timestamps), use TypedArrays (e.g., Float64Array). It’s the difference between a 500MB heap and a 50MB heap.
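To make that concrete, here is a sketch comparing one million points stored as objects versus a flat `Float64Array`. The numbers are illustrative, not benchmarks:

```javascript
// One million (x, y) points, two ways.
const N = 1_000_000;

// Object-per-point: convenient, but each object carries a hidden
// class pointer and property storage on the V8 heap.
const points = Array.from({ length: N }, (_, i) => ({ x: i, y: i * 2 }));

// Flat typed array: 16 bytes per point, one contiguous allocation,
// nothing for the GC to trace per element.
const flat = new Float64Array(N * 2);
for (let i = 0; i < N; i++) {
  flat[i * 2] = i;         // x
  flat[i * 2 + 1] = i * 2; // y
}

// The typed array is exactly N * 2 * 8 bytes.
console.log(flat.byteLength); // 16000000
```

You lose named properties (`p.x` becomes `flat[i * 2]`), which is why this belongs in hot paths and bulk storage, not in your domain model.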
Error Handling: “Try/Catch” is Not a Strategy
If I see one more catch (e) { console.error(e); } in a pull request, I’m going to lose it. Swallowing errors is the fastest way to create a “Heisenbug” that only appears in production and leaves no trace in the logs. In a distributed system, an error without context is useless.
JavaScript now supports the cause property in the Error constructor. Use it. It allows you to wrap low-level errors (like a TCP timeout) in high-level business logic (like “Failed to process checkout”) without losing the original stack trace.
```javascript
async function getStripeCustomer(customerId) {
  try {
    return await stripe.customers.retrieve(customerId);
  } catch (err) {
    throw new Error(`Failed to fetch customer ${customerId} from Stripe`, { cause: err });
  }
}
```
When this fails, your logs will show the full chain. You’ll see exactly which customer failed and the underlying 401 or 500 error from Stripe. Without cause, you just get “Error: Failed to fetch customer,” and you’re stuck grepping logs for an hour.
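Getting that full chain into your logs takes a few lines. Here is a hypothetical `causeChain` helper (not a built-in) that flattens an error and its causes into one log-friendly string:

```javascript
// Flatten an Error and its `cause` chain into an array of
// messages, so a single log line shows the full failure path.
function causeChain(err) {
  const chain = [];
  for (let e = err; e; e = e.cause) {
    chain.push(e.message ?? String(e));
  }
  return chain;
}

const low = new Error('ETIMEDOUT: connect to api.stripe.com');
const high = new Error('Failed to fetch customer cus_123', { cause: low });

console.log(causeChain(high).join(' <- '));
// Failed to fetch customer cus_123 <- ETIMEDOUT: connect to api.stripe.com
```

Note that the `cause` option requires Node 16.9+; on newer Node versions, `console.error(high)` will also print the cause for you, but structured loggers usually need the explicit walk.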
The “node_modules” Liability
We need to talk about the npm install addiction. Every dependency you add is a liability. It’s a security risk (remember left-pad or event-stream?), a performance hit, and a maintenance nightmare. I’ve seen projects where node_modules was 1.2GB for a simple CRUD API. That’s 1.2GB of code that needs to be parsed by the runtime on every cold start.
The SRE Audit:
1. Do you need Lodash? No. Modern JS has Array.map, filter, reduce, flat, and Object.fromEntries. If you only need cloneDeep, write a utility or use structuredClone().
2. Do you need Moment.js? Absolutely not. It’s a bloated, mutable mess. Use date-fns or the native Intl object.
3. Check for duplicates: Use npm dedupe. I once found three different versions of request (which is deprecated!) in a single dependency tree.
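Most of those replacements are one-liners with no install. A quick sketch (`structuredClone` needs Node 17+; the `timeZone` option is set so the output is deterministic):

```javascript
// Deep clone without Lodash: structuredClone handles nested
// objects, arrays, Dates, Maps, and Sets (not functions).
const original = { user: { id: 1 }, tags: ['a', 'b'] };
const copy = structuredClone(original);
copy.user.id = 2;
console.log(original.user.id); // 1 -- the clone is fully independent

// Date formatting without Moment: Intl is built into the runtime.
const fmt = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'short', day: '2-digit', timeZone: 'UTC',
});
console.log(fmt.format(new Date(Date.UTC(2019, 0, 15)))); // Jan 15, 2019
```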
Note to self: Always run `npm audit` in the CI/CD pipeline. But don’t trust it blindly. It misses a lot of “malicious but technically valid” code changes.
TypeScript: “Any” is a Production Incident
TypeScript is not about making the code look pretty; it’s about creating a contract. When you use any, you are breaking that contract and lying to your future self. I’ve seen any cause production outages where a function expected a string but got null, leading to a TypeError: Cannot read property 'split' of null at 4 PM on a Friday.
If you don’t know the type, use unknown. It forces you to perform type checking before you do anything dangerous with the variable.
```typescript
// Dangerous
function processData(input: any) {
  console.log(input.name.toUpperCase()); // Boom if input is null
}

// Safe
function processData(input: unknown) {
  if (input && typeof input === 'object' && 'name' in input && typeof input.name === 'string') {
    console.log(input.name.toUpperCase());
  }
}
```
Yes, it’s more verbose. That’s the point. It forces you to handle the “edge cases” that are actually just “reality.”
The Event Loop: Don’t Block the Heartbeat
JavaScript is single-threaded. If you block the event loop, you block everything. I once saw a dev run JSON.parse() on a 50MB string inside an Express route. Every time that route was hit, the entire server stopped responding to all other requests for 200ms. Under load, the health check failed, the load balancer pulled the node, and the remaining nodes took the extra traffic, hit the same route, and died. A classic cascading failure.
- Synchronous APIs: Avoid `fs.readFileSync` or `JSON.parse` on large blobs in the hot path.
- CPU Intensive Tasks: If you need to calculate a hash or process an image, offload it to a Worker Thread.
- setImmediate vs nextTick: Use `setImmediate` to break up long-running loops. `process.nextTick` fires before the event loop continues, so if you use it recursively, you will starve the I/O.
```javascript
// How to kill your server
function heavyTask() {
  for (let i = 0; i < 1000000000; i++) {
    // Blocking the loop for seconds
  }
}

// How to be a good citizen
function heavyTask(iteration = 0) {
  if (iteration >= 1000) return;
  // Do a chunk of work
  doWork();
  // Yield back to the event loop so it can handle I/O
  setImmediate(() => heavyTask(iteration + 1));
}
```
The “Real World” Gotcha: The Buffer Heap
In Node.js, Buffer objects are allocated outside the V8 heap in “External Memory.” This is why your process might show 1GB of RAM usage in top, but your V8 heap metrics show only 200MB. If you are handling file uploads or streaming data, you need to monitor external memory specifically. I’ve seen many teams ignore this and wonder why their containers are getting OOM-killed when the “heap usage” looks fine.
```javascript
const used = process.memoryUsage();
console.log(`Heap Used: ${used.heapUsed / 1024 / 1024} MB`);
console.log(`External: ${used.external / 1024 / 1024} MB`);
console.log(`RSS: ${used.rss / 1024 / 1024} MB`);
```
If RSS (Resident Set Size) is climbing but heapUsed is flat, you have a Buffer or a C++ addon leak. This is common when using older versions of grpc or custom image processing libraries.
Performance: Hidden Classes and Deoptimizations
V8 tries to optimize your code by creating “hidden classes” for your objects. If you change the shape of an object (by adding or deleting properties) after it’s been created, V8 “deoptimizes” that code. It drops back to a slower, interpreted mode.
```javascript
// V8 loves this
function User(id, name) {
  this.id = id;
  this.name = name;
}
const user1 = new User(1, 'Alice');
const user2 = new User(2, 'Bob');

// V8 hates this
const user3 = { id: 3 };
user3.name = 'Charlie'; // Shape change!
delete user3.id;        // Shape change again! V8 gives up.
```
In a high-frequency loop, this can be the difference between 10k operations per second and 100k. Always initialize your objects with all their fields, even if they are null or undefined. Never use delete; set the value to undefined instead.
The Wrap-up
Stop chasing the latest framework or the “cleanest” syntax. JavaScript in production is a game of resource management, error propagation, and understanding the underlying runtime. If you treat your code like a delicate piece of art, it will break the moment it hits real traffic. Treat it like a plumbing system: it needs to handle pressure, it needs clear drainage for errors, and you need to know exactly where the leaks are. Stop obsessing over “javascript best” and start obsessing over “javascript stable.”