
Adventures in Nodeland

October 21, 2025

Noop Functions vs Optional Chaining: A Performance Deep Dive

Discover why noop functions are significantly faster than optional chaining in JavaScript!

Hi Folks,

This week I want to talk about something that might surprise you: the performance cost of optional chaining in JavaScript. A question came up recently about whether using a noop function pattern is faster than optional chaining, and the answer might make you rethink some of your coding patterns.

Following a pull request review of mine, Simone Sanfratello created a comprehensive benchmark to verify my thinking on this topic, and the results were eye-opening.

The Setup

Let's start with a simple scenario. You have two approaches to handle optional function calls:

// Approach 1: Noop function
function noop() {}
function testNoop() {
  noop();
}

// Approach 2: Optional chaining
const a = {};
function testOptionalChaining() {
  a.b?.fn?.();
}

Both accomplish the same goal: they let you call a function that may not exist without throwing an error. But how do they compare performance-wise?

The Numbers Don't Lie

Simone and I ran comprehensive benchmarks with 5 million iterations to get precise measurements. The results were striking:

Test Case                               Ops/Second      Relative to Noop
Noop Function Call                      939,139,797     Baseline
Optional Chaining (empty object)        134,240,361     7.00x slower
Optional Chaining (with method)         149,748,151     6.27x slower
Deep Optional Chaining (empty)          106,370,022     8.83x slower
Deep Optional Chaining (with method)    169,510,591     5.54x slower

Yes, you read that right. Noop functions are 5.5x to 8.8x faster than optional chaining operations.
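If you want to try this locally, here's a minimal sketch of such a benchmark using only Node's built-in high-resolution timer. The `bench` helper and its structure are my own illustration, not Simone's actual harness, and absolute numbers will vary by machine, Node.js version, and JIT warmup (a serious harness would use a benchmarking library that accounts for warmup and variance):

```javascript
function noop() {}
const a = {};

// Run fn `iterations` times and report operations per second.
function bench(label, fn, iterations = 5_000_000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const ns = Number(process.hrtime.bigint() - start);
  const opsPerSec = Math.round(iterations / (ns / 1e9));
  console.log(`${label}: ${opsPerSec.toLocaleString('en-US')} ops/sec`);
  return opsPerSec;
}

bench('noop call', () => { noop(); });
bench('optional chaining (empty object)', () => { a.b?.fn?.(); });
```

Treat single runs like this as directional only; microbenchmarks are notoriously sensitive to inlining decisions and dead-code elimination.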

Why Does This Happen?

The performance difference comes down to what the JavaScript engine needs to do:

Noop function: Simple function call overhead. The V8 engine optimizes this extremely well - it's just a jump to a known address and back. In fact, V8 will inline trivial functions like noop, making them essentially zero-overhead. The function call completely disappears in the optimized code.

Optional chaining: Property lookup, null/undefined check, potentially multiple checks for chained operations, and then the function call. Each ?. adds overhead that V8 can't optimize away because it has to perform the null/undefined checks at runtime.

The deeper your optional chaining, the worse it gets. In the empty-object case from the table above, triple chaining like a?.b?.c?.fn?.() is about 1.26x slower than single-level optional chaining (roughly 106M vs. 134M ops/sec).

A Real-World Pattern: Fastify's Logger

This is exactly why Fastify uses the abstract-logging module. When no logger is provided, instead of checking logger?.info?.() throughout the codebase, Fastify provides a noop logger object with all the logging methods as noop functions.

// Instead of this everywhere in the code:
server.logger?.info?.('Request received');
server.logger?.error?.('Something went wrong');

// Fastify does this:
const logger = options.logger || require('abstract-logging');
// Now just call it directly:
server.logger.info('Request received');
server.logger.error('Something went wrong');

This is an important technique: provide noops upfront rather than check for existence later. V8 inlines these noop functions, so when logging is disabled, you pay essentially zero cost. The function call is optimized away completely. But if you use optional chaining, you're stuck with the runtime checks every single time, and V8 can't optimize those away.
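The pattern is easy to replicate in your own code. Here's a minimal sketch of a noop logger and a hypothetical `createServer` factory that applies it up front (abstract-logging itself covers more surface, including child loggers; this is just the core idea):

```javascript
function noop() {}

// One object with every logging method stubbed out as a noop.
const noopLogger = {
  fatal: noop, error: noop, warn: noop,
  info: noop, debug: noop, trace: noop,
};

// Hypothetical factory: resolve the logger once, at construction time.
function createServer(options = {}) {
  return { logger: options.logger || noopLogger };
}

const server = createServer();          // no logger supplied
server.logger.info('Request received'); // safe, no runtime checks needed
```

Because the decision is made once at construction, every call site can invoke `server.logger.info(...)` unconditionally, which is exactly what lets V8 inline the noops away.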

The TypeScript Trap

One of the reasons we see so much unnecessary optional chaining in modern codebases is TypeScript. TypeScript's type system encourages defensive coding by marking properties as potentially undefined, even when your runtime guarantees they exist. This leads developers to add ?. everywhere "just to be safe" and satisfy the type checker.

Consider this common pattern:

interface Config {
  hooks?: {
    onRequest?: () => void;
  }
}

function processRequest(config: Config) {
  config.hooks?.onRequest?.(); // Is this really needed?
}

If you know your config object always has hooks defined at runtime, you're paying the optional chaining tax unnecessarily. TypeScript's strictNullChecks pushes you toward this defensive style, but it comes at a performance cost. The type system can't know your runtime invariants, so it forces you to check things that might never actually be undefined in practice.

The solution? Use type assertions or better type modeling when you have runtime guarantees. Here's how:

// Instead of this:
config.hooks?.onRequest?.();

// Do this if you know hooks always exists:
config.hooks!.onRequest?.();

// Or even better, fix the types to match reality:
interface Config {
  hooks: {
    onRequest?: () => void;
    onResponse?: () => void;
  }
}

// Now you can write:
config.hooks.onRequest?.();

// Or resolve the hook once up front, falling back to a noop if it's absent:
function noop() {}
const onRequest = config.hooks.onRequest || noop;
onRequest();

Don't let TypeScript's pessimistic type system trick you into defensive code you don't need.
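One way to apply this in plain JavaScript is to normalize the config once at construction time, so the hot path never needs optional chaining at all. The `normalizeConfig` helper below is a hypothetical sketch with illustrative hook names, not an API from any library:

```javascript
function noop() {}

// Fill in noop defaults once, so callers never have to check for existence.
function normalizeConfig(config = {}) {
  const hooks = config.hooks || {};
  return {
    hooks: {
      onRequest: hooks.onRequest || noop,
      onResponse: hooks.onResponse || noop,
    },
  };
}

const cfg = normalizeConfig({
  onRequest: undefined, // omitted hooks become noops
  hooks: { onRequest: () => console.log('handling request') },
});

cfg.hooks.onRequest();  // runs the user-supplied hook
cfg.hooks.onResponse(); // noop: safe to call, no ?. required
```

The one-time cost of normalization is paid at startup, while every per-request call site stays branch-free.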

The Real-World Context

Before you rush to refactor all your optional chaining, let me add some important context:

Even the "slowest" optional chaining still executes at 106+ million operations per second. For most applications, this performance difference is completely negligible. You're not going to notice the difference unless you're doing this in an extremely hot code path.

Memory usage is also identical across both approaches, so there are no concerns on that front.

My Recommendation

Don't optimize prematurely. Write your code with optional chaining where it makes sense for safety and readability. For most Node.js applications, including web servers and APIs, optional chaining is perfectly fine. The safety and readability benefits far outweigh the performance cost in 99% of cases.

However, noop functions make sense when you're in a performance-critical hot path or when every microsecond counts. If you control the code and can guarantee the function exists, skipping the optional chaining overhead is a clear win. Think high-frequency operations, tight loops, or code that runs thousands of times per request. Even at a few thousand calls per request, that 5-8x performance difference starts to add up.

If profiling shows that a specific code path is a bottleneck, then consider switching to noop functions or other optimizations. Use optional chaining for dealing with external data or APIs where you don't control the structure, and use it in normal business logic where code readability and safety are priorities.

Remember: readable, maintainable code is worth more than micro-optimizations in most cases. But when those microseconds matter, now you know the cost.

Thanks to Simone Sanfratello for creating the benchmarks that confirmed these performance characteristics!
