JavaScript Best Practices: Master Clean & Efficient Code

Stop Writing JavaScript Like It’s 2015: A Senior SRE’s Guide to Code That Doesn’t Wake Me Up at 3 AM

In 2019, I was paged at 3:14 AM because a critical microservice handling Stripe webhooks was stuck in a crash loop. The logs showed nothing but a cryptic OOM-killed message from the Kubelet. We had scaled the pods from 512MB to 4GB of RAM, and the service still ate it all in under ten minutes. After four hours of profiling the heap in a frantic war room, I found the culprit: a “simple” global Map used to cache idempotency keys. It had no TTL, no size limit, and was being populated by a forEach loop that didn’t await its internal promises. We weren’t just leaking memory; we were creating a massive backlog of unhandled rejections that the V8 engine couldn’t garbage collect fast enough.

The developer who wrote it followed every “clean code” tutorial on the internet. They used const, they used arrow functions, and they used template literals. But they didn’t understand how the event loop actually handles backpressure or how the V8 garbage collector treats long-lived objects. This is the problem with most JavaScript “best practices” you find online. They focus on how the code looks to a human, not how it behaves under load in a Linux container. If your code looks “clean” but triggers a SIGTERM every time your traffic spikes by 20%, your code is garbage.

The Myth of “Clean” JavaScript

Most documentation is written for the “happy path.” It assumes your network is reliable, your dependencies are secure, and your memory is infinite. In reality, writing the best JavaScript isn’t about using the latest syntax sugar from the TC39 committee. It’s about predictability. I don’t care if you use async/await or .then(), as long as you know exactly what happens when the underlying TCP socket hangs. Most tutorials ignore the fact that Node.js is a single-threaded runtime where one bad JSON.parse() on a 50MB string can block the entire event loop for 200ms, spiking your p99 latency and triggering a cascading failure across your service mesh.

We need to stop treating JavaScript as a scripting language for browsers and start treating it as a systems language for high-throughput environments. That means moving away from “how do I write this” to “how does this fail.”

1. Defensive Programming with Zod and AbortController

If you are still using process.env.API_KEY directly in your code, you are asking for a production outage. I’ve seen services start up, run for ten minutes, and then crash because a specific code path finally hit an undefined environment variable. The right way to handle configuration and external data is through strict schema validation at the boundaries.


import { z } from 'zod';

const envSchema = z.object({
  STRIPE_SECRET_KEY: z.string().min(1),
  LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
  MAX_RETRIES: z.coerce.number().int().positive().default(3),
});

// Fail fast. If the env is wrong, the pod shouldn't even start.
const env = envSchema.parse(process.env);

export type Env = z.infer<typeof envSchema>;

Validation isn’t just for environment variables. It’s for every fetch call. If api.stripe.com changes their response format (unlikely, but possible) or a middlebox injects an error page, your code should catch it before it pollutes your internal state.

Furthermore, stop making network requests without a timeout. A hanging socket is a silent killer. It consumes a file descriptor and keeps the event loop active. Use AbortController. It is now a first-class citizen in Node.js and the browser.


async function fetchWithTimeout(url: string, timeoutMs = 5000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return await response.json();
  } finally {
    clearTimeout(timeoutId);
  }
}

Pro-tip: Always clear your setTimeout in the finally block. If the request finishes in 10ms, you don’t want that timer object sitting in the heap for the remaining 4990ms.
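If you’re on Node 17.3+ or a modern browser, AbortSignal.timeout() wraps that controller-plus-timer dance for you, including the cleanup:

```typescript
// AbortSignal.timeout creates a signal that aborts automatically
// and tears down its own timer once the request settles.
export async function fetchJson(url: string, timeoutMs = 5000) {
  const response = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  return response.json();
}
```

One behavioral difference worth knowing: a timed-out request rejects with a TimeoutError rather than a generic AbortError, which makes it easier to alert on timeouts specifically.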

2. The Event Loop is Not Your Friend

JavaScript’s greatest strength—its non-blocking I/O—is also its biggest foot-gun. I see developers doing CPU-intensive work inside the main thread all the time. They think because they are using async, it’s “backgrounded.” It’s not. async only helps with I/O. If you are calculating a bcrypt hash, processing a large image, or even just running JSON.stringify on a massive object, you are stopping the world.

  • The 10ms Rule: If a synchronous block of code takes longer than 10ms, it belongs in a Worker Thread or a separate microservice.
  • Avoid process.nextTick: Unless you are writing a library and need to guarantee a callback runs after the current operation completes but before the event loop continues, stay away. Overusing it can lead to I/O starvation.
  • Use setImmediate for chunking: If you must process a large array on the main thread, break it up.

// Bad: Blocks the loop for 500ms
function processHugeArray(data: any[]) {
  return data.map(item => heavyTransformation(item));
}

// Better: Yields back to the loop
async function processInChunks(data: any[]) {
  const results = [];
  const chunkSize = 100;
  for (let i = 0; i < data.length; i += chunkSize) {
    const chunk = data.slice(i, i + chunkSize);
    results.push(...chunk.map(item => heavyTransformation(item)));
    await new Promise(resolve => setImmediate(resolve));
  }
  return results;
}

This allows the event loop to handle incoming HTTP requests or health checks between chunks. It increases total execution time but prevents your service from appearing “dead” to the load balancer.
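For work that can’t be chunked — bcrypt, image transforms — move it off the main thread entirely. Here is a minimal worker_threads sketch; the worker body is inlined via eval to keep the example self-contained, and the reduction stands in for whatever CPU-bound function you actually have. In a real service the worker would live in its own file:

```typescript
import { Worker } from 'node:worker_threads';

// Runs a CPU-bound reduction in a worker so the event loop stays free.
// `eval: true` is for demonstration only; production code should point
// Worker at a dedicated worker file.
export function sumInWorker(numbers: number[]): Promise<number> {
  const workerSource = `
    const { parentPort, workerData } = require('node:worker_threads');
    // Stand-in for real CPU-bound work (hashing, image processing, ...)
    const total = workerData.reduce((acc, n) => acc + n, 0);
    parentPort.postMessage(total);
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerSource, { eval: true, workerData: numbers });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}
```

Spawning a worker has real startup cost, so for sustained load you’d keep a small pool alive rather than creating one per request.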

3. Memory Management: Maps, Sets, and Closures

JavaScript is garbage collected, which makes developers lazy. The most common advice you’ll read is “don’t worry about memory.” That is dangerous nonsense. In a long-running Node.js process, memory leaks are inevitable if you don’t understand references.

Consider the Map. It is often used as a cache. But unlike a WeakMap, a standard Map holds a strong reference to both the key and the value. If your key is an object, that object will never be garbage collected as long as it sits in the Map, even if you drop every other reference to it.


// The "Memory Leak" Special
const cache = new Map();

function getSession(user: { id: string }) {
  if (cache.has(user)) return cache.get(user);
  const session = createSession(user);
  cache.set(user, session); // 'user' object is now trapped in memory forever
  return session;
}

If you’re building a cache, use a library with a Least Recently Used (LRU) policy like lru-cache. Never, ever use a raw Object or Map as a global cache without a strictly enforced size limit. I’ve seen 2GB heaps filled with nothing but 4-year-old session metadata that was “cached for performance.”

Note to self: WeakMap is not a silver bullet. You can’t iterate over it, and you can’t use primitives (strings/numbers) as keys. It’s for metadata about objects you don’t own.
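If you want to see what lru-cache is doing for you under the hood, here is a minimal sketch that leans on Map’s insertion-order guarantee. It’s a teaching aid, not a replacement for the real library — no TTL, no weighted sizes:

```typescript
// Minimal LRU sketch: a Map iterates in insertion order, so the first
// key is always the least recently used entry.
class TinyLRU<K, V> {
  private store = new Map<K, V>();
  constructor(private readonly maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.store.get(key);
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used.
      this.store.delete(key);
      this.store.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.store.delete(key);
    this.store.set(key, value);
    if (this.store.size > this.maxSize) {
      // Evict the least recently used entry instead of growing forever.
      const oldest = this.store.keys().next().value as K;
      this.store.delete(oldest);
    }
  }
}
```

The delete-then-set trick is what keeps recency tracking O(1); the eviction on overflow is the size limit that the leaky global Map from my 3 AM incident never had.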

4. Logging: Console.log is a Production Anti-Pattern

If I see console.log(`User ${id} logged in`) in a pull request, I reject it immediately. Standard out is synchronous in some environments and lacks structure. When you’re searching through 4TB of logs in Datadog or ELK, you don’t want to write regex to find a user ID. You want to filter by a JSON field.

Use a structured logger like pino or winston. pino is preferred because it has the lowest overhead. Every log should include a request_id or trace_id to allow for distributed tracing.


import pino from 'pino';
const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  formatters: {
    level: (label) => ({ level: label.toUpperCase() }),
  },
});

// Real-world usage
logger.info({
  event: 'payment_processed',
  userId: 'user_29384',
  amount: 49.99,
  currency: 'USD',
  stripeId: 'ch_3Nabc123',
  latency_ms: 142
}, 'Successfully processed payment');

This structure allows you to build dashboards that show “Average latency of payments by currency” without parsing a single string. It also avoids the allocation overhead of building throwaway strings with template literals in high-frequency logging paths.
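That claim is easy to demonstrate. Once logs are JSON lines, aggregation is a parse and a group-by rather than a regex. A toy sketch with made-up log lines:

```typescript
// Toy aggregation over JSON log lines: average payment latency by
// currency, with no string parsing beyond JSON.parse.
interface PaymentLog { event: string; currency: string; latency_ms: number; }

export function avgLatencyByCurrency(lines: string[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const line of lines) {
    const entry = JSON.parse(line) as PaymentLog;
    if (entry.event !== 'payment_processed') continue;
    const bucket = (sums[entry.currency] ??= { total: 0, count: 0 });
    bucket.total += entry.latency_ms;
    bucket.count += 1;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([currency, { total, count }]) => [currency, total / count]),
  );
}
```

This is exactly the query a Datadog facet or an ELK terms aggregation runs for you, which is why the structure has to exist at write time.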

5. Dependency Hell and the npm audit Lie

The JavaScript ecosystem is a house of cards built on node_modules. We’ve all seen the npm audit warnings. Most of them are false positives (ReDoS in a build-time devDependency), but the real danger is “Dependency Bloat.” Every package you add is a liability. It’s more code to parse at startup, more memory to hold in the heap, and more surface area for supply-chain attacks.

I take a hard stance here: Prefer standard library over dependencies.

  • Do you need lodash? Probably not. Modern JS has Array.prototype.flatMap, Object.fromEntries, and optional chaining.
  • Do you need moment.js? Absolutely not. Use date-fns or the native Intl object. Moment is a 200KB blob of mutable state.
  • Do you need request? It’s deprecated. Use the native fetch.

Before adding a package, check it on Bundlephobia. If a utility library is adding 50KB to your bundle, ask yourself if you can write those three functions yourself. In the SRE world, the most reliable code is the code that isn’t there.
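As a sanity check, the lodash helpers most codebases actually use are one-liners in modern JavaScript. A sketch of three common ones (groupBy is shown with a reduce; Object.groupBy exists in newer runtimes but isn’t assumed here):

```typescript
// _.groupBy: a reduce into a plain object
export function groupBy<T>(items: T[], keyFn: (item: T) => string): Record<string, T[]> {
  return items.reduce<Record<string, T[]>>((acc, item) => {
    (acc[keyFn(item)] ??= []).push(item);
    return acc;
  }, {});
}

// _.pick: Object.fromEntries over the keys you want to keep
export function pick<T extends object, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  return Object.fromEntries(keys.map((k) => [k, obj[k]])) as Pick<T, K>;
}

// _.flatten (one level): native Array.prototype.flat
export const flatten = <T>(arrays: T[][]): T[] => arrays.flat();
```

Ten lines of code you own and can read beats 70KB of code you audit on every dependabot PR.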

6. TypeScript is Mandatory (But Don’t Get Fancy)

JavaScript without types is a nightmare to maintain at scale. However, there is a trend of “Type Gymnastics” where developers spend hours writing complex generics that are impossible to read. This is not best practice; it’s ego-driven development.

Your TypeScript should be boring. Use interface for objects, type for unions, and avoid any like the plague. If you find yourself using as unknown as TargetType, you’ve failed. You should be using a type guard or a Zod schema instead.


interface User {
  id: string;
  email: string;
}

// Type Guard: The right way to handle 'unknown' data
function isUser(input: unknown): input is User {
  return (
    typeof input === 'object' &&
    input !== null &&
    typeof (input as User).id === 'string' &&
    typeof (input as User).email === 'string'
  );
}

const data = await fetchFromLegacySystem();
if (isUser(data)) {
  console.log(data.email); // TypeScript knows this is safe
}

Also, enable noImplicitAny, strictNullChecks, and exactOptionalPropertyTypes in your tsconfig.json. If you aren’t running in strict mode, you aren’t really using TypeScript; you’re just using expensive comments.
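A minimal sketch of the relevant tsconfig.json fragment — note that "strict": true already implies noImplicitAny and strictNullChecks, while exactOptionalPropertyTypes and noUncheckedIndexedAccess must be opted into separately:

```json
{
  "compilerOptions": {
    "strict": true,
    "exactOptionalPropertyTypes": true,
    "noUncheckedIndexedAccess": true,
    "noFallthroughCasesInSwitch": true
  }
}
```

noUncheckedIndexedAccess is the sleeper here: it forces you to handle undefined on every arr[i] and obj[key], which is exactly where production null-pointer crashes come from.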

7. The “Gotcha”: Error Handling in Promises

Here is a snippet that has caused more outages than almost anything else:


try {
  const data = items.map(async (item) => {
    return await processItem(item);
  });
} catch (e) {
  logger.error(e);
}

This try/catch does nothing. items.map returns an array of Promises. It does not wait for them. If processItem throws, the error will be an unhandledRejection, which in modern Node.js will crash the process. You must use Promise.all() or Promise.allSettled().

But wait: Promise.all() has “fail-fast” behavior. If one promise rejects, the others keep running in the background, and their results and errors are silently discarded. If you want every piece of work accounted for, use Promise.allSettled().


const results = await Promise.allSettled(items.map(item => processItem(item)));

for (const result of results) {
  if (result.status === 'rejected') {
    logger.error({ reason: result.reason }, 'Item processing failed');
  } else {
    logger.info({ value: result.value }, 'Item processing succeeded');
  }
}
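One more caveat: Promise.allSettled still starts every promise immediately. If items has 10,000 entries and processItem hits a downstream API, you have just launched 10,000 concurrent requests at it. A minimal bounded-concurrency sketch — libraries like p-limit do the same job with more polish:

```typescript
// Runs fn over items with at most `limit` promises in flight,
// preserving order and settling every item like Promise.allSettled.
export async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<PromiseSettledResult<R>[]> {
  const results: PromiseSettledResult<R>[] = new Array(items.length);
  let next = 0;
  async function runWorker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim an index synchronously, before awaiting
      try {
        results[i] = { status: 'fulfilled', value: await fn(items[i]) };
      } catch (reason) {
        results[i] = { status: 'rejected', reason };
      }
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, runWorker);
  await Promise.all(workers);
  return results;
}
```

The index claim works without locks because the `next++` happens synchronously between awaits; the single-threaded event loop is doing the mutual exclusion for you.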

8. Performance: The Hidden Cost of Spread Operators

We love the spread operator (...). It’s elegant. But in a hot loop, it is a performance killer. Every time you do { ...obj, newKey: 'value' }, you are creating a shallow copy of the entire object. If that object has 100 properties and you do this in a loop of 10,000 items, you are creating massive pressure on the Young Generation of the V8 heap.


// Slow: Creates 10,000 intermediate objects
const final = data.reduce((acc, item) => ({
  ...acc,
  [item.id]: item.value
}), {});

// Fast: Mutates one object
const final: Record<string, unknown> = {};
for (const item of data) {
  final[item.id] = item.value;
}

I know, I know. “Immutability is better.” In your Redux reducers? Sure. In a backend data-processing pipeline handling 5k requests per second? No. Performance is a feature, and sometimes that means using a for loop and mutation to avoid OOM-killing your pod.
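If you want the lookup table without visible mutation, Object.fromEntries gives you a linear-time middle ground: one array of pairs plus one result object, instead of a fresh intermediate object per iteration. A sketch, with an illustrative item shape:

```typescript
interface Item { id: string; value: number; }

// O(n): builds one pairs array and one object, no per-iteration copies.
export function indexById(data: Item[]): Record<string, number> {
  return Object.fromEntries(data.map((item) => [item.id, item.value]));
}
```

The mutation is still there, it’s just hidden inside Object.fromEntries where nobody has to read it.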

The Real World: Why Your “Best Practices” Fail

The biggest “Gotcha” I see is the assumption that async code is always faster. I once optimized a service by removing async/await from a critical path. The overhead of the Promise state machine and the microtask queue was actually slower than just running the synchronous logic for small, fast operations.

Another expert-level gotcha: JSON.parse is blocking. If you are building a gateway that receives large JSON payloads, you are blocking the event loop every time you parse a request. If your payloads are >1MB, consider using a streaming JSON parser like stream-json. It’s more complex to write, but it keeps your p99s flat.

Finally, stop using npm audit fix. It often upgrades major versions blindly and breaks your build. Use npm audit to identify the vulnerability, then manually decide if the risk is real. If the vulnerability is in a devDependency used for linting, it is likely a non-issue for your production runtime. Don’t let a CLI tool dictate your dependency graph.

Wrap-up

Stop chasing the “cleanest” syntax and start chasing the most resilient runtime behavior; your job isn’t to write code that looks like a blog post, it’s to write code that survives a Friday night traffic spike without needing a human to intervene.
