
Adventures in Nodeland

October 9, 2025

From fast-redact to slow-redact

Introducing slow-redact: a safer log redaction tool with immutability guarantees, combating flawed CVEs.

Today I'm announcing slow-redact, a new package that provides the same API as fast-redact but with a crucial difference: immutability guarantees. This package was born out of necessity after a spurious CVE filing against fast-redact and our decision to prioritize safety in the pino ecosystem.

Protecting Sensitive Data in Logs

Log redaction is a critical security feature for production logging systems because applications routinely process sensitive data that must never appear in logs—including passwords, API keys, authentication tokens, credit card numbers, and personally identifiable information (PII). Without proper redaction, this sensitive data can be exposed in log files, which are often stored in less secure locations, shared across teams, sent to third-party monitoring services, or retained for long periods. A single leaked password or API key in a log file can compromise an entire system, violate data privacy regulations like GDPR or HIPAA, and create serious security vulnerabilities. Effective redaction ensures that sensitive information is automatically removed or masked before logs are written, allowing developers to maintain comprehensive logging for debugging and monitoring while protecting user privacy and system security.

Pino has shipped log redaction since the early days.
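Pino exposes this through its redact option. A minimal sketch of how it is configured (the paths here are illustrative, chosen for this example rather than taken from any real application):

```javascript
const pino = require('pino');

// Any value found at these paths on a logged object is masked before
// the line is written. 'censor' overrides pino's default placeholder.
const logger = pino({
  redact: {
    paths: ['req.headers.authorization', 'req.body.password'],
    censor: '[REDACTED]'
  }
});

logger.info({
  req: {
    headers: { authorization: 'Bearer abc123' },
    body: { username: 'john', password: 'secret123' }
  }
}, 'incoming request');
// The emitted JSON line contains "[REDACTED]" in place of both values.
```

Under the hood, this option has historically been powered by fast-redact, which is why the CVE discussed below matters to the whole pino ecosystem.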

Why the Switch?

On September 23rd, 2025, CVE-2025-57319 was filed against fast-redact, claiming a "Prototype Pollution vulnerability" that could cause denial of service. However, this CVE is fundamentally flawed.

The vulnerability report demonstrates the issue by calling an internal, undocumented utility function (nestedRestore) directly:

// This is NOT how you use fast-redact
require("fast-redact/lib/modifiers").nestedRestore(instructions);

This is like claiming a car is unsafe because you can crash it if you remove the wheels while driving. When you use fast-redact through its public API, this vulnerability doesn't exist:

// This is the correct usage - no vulnerability
const fastRedact = require("fast-redact");
const redact = fastRedact({
  paths: ['polluted.prototype.constructor'],
});
console.log(redact({ polluted: { prototype: { constructor: false } } }));
// Output: {"polluted":{"prototype":{"constructor":"[REDACTED]"}}}

David Mark Clements and I have disputed this CVE with MITRE, but the damage is done. Security scanners will flag this, and corporate security teams will panic. More details are in https://github.com/davidmarkclements/fast-redact/issues/75.

The Real Problem

The bigger issue isn't this specific CVE - it's that the CVE system itself has become unreliable. When someone can file a vulnerability report against internal utility functions that no sane developer would ever call directly, the entire system loses credibility. It becomes a tool for arbitrary ecosystem control rather than legitimate security improvement.

But rather than fight this broken system, I decided to build a better solution.

Enter slow-redact

slow-redact provides the exact same API as fast-redact but with a crucial architectural difference: it never mutates the original object. Instead, it uses innovative selective cloning that provides immutability guarantees while maintaining competitive performance.

const slowRedact = require('slow-redact');

const redact = slowRedact({
  paths: ['headers.cookie', 'user.password']
});

const obj = {
  headers: { cookie: 'secret', 'x-request-id': '123' },
  user: { name: 'john', password: 'secret123' }
};

const result = redact(obj);
// Result: {"headers":{"cookie":"[REDACTED]","x-request-id":"123"},"user":{"name":"john","password":"[REDACTED]"}}

// Original object is completely unchanged
console.log(obj.headers.cookie); // 'secret'

Key Advantages

  1. Immutability: Original objects are never modified
  2. Performance: Competitive with fast-redact for real-world usage patterns
  3. Memory Efficiency: Selective cloning shares references for non-redacted data
  4. Full API Compatibility: Drop-in replacement for fast-redact
  5. Safety: No mutation means no mutation-based vulnerabilities

Performance: Not Actually Slow

Despite the name, slow-redact is performance-competitive with fast-redact for typical usage:

Operation Type                       slow-redact   fast-redact   Ratio
Large objects (minimal redaction)    ~18μs         ~17μs         ~same
Large objects (wildcards)            ~48μs         ~37μs         1.3x slower
Small objects                        ~690ns        ~200ns        ~3.5x slower

For large objects with selective redaction (the common pino use case), performance is essentially identical. The name "slow-redact" is intentionally provocative - it challenges the assumption that "fast" always means "better."

Why Pino is Switching

In pino, we log objects that might be shared across multiple contexts. Mutating these objects can cause subtle bugs and unpredictable behavior. With slow-redact, we get:

  • Predictable behavior: Original objects never change
  • Debugging safety: Can compare before/after redaction
  • Functional programming compatibility: Works naturally with immutable patterns
  • Zero security concerns: No mutation-based attack vectors

This has shipped as pino 9.13.0.

The Technical Innovation: Selective Cloning Explained

slow-redact achieves competitive performance through selective cloning: an approach that only clones object branches that contain redaction targets while sharing references for everything else.

How Traditional Deep Cloning Works (Inefficient)

Most immutable redaction approaches use deep cloning:

// Traditional approach: Clone everything, then redact
const deepClone = obj => JSON.parse(JSON.stringify(obj)); // Simplified
const result = deepClone(originalObject);
redact(result, paths);

This creates entirely new objects for everything, consuming massive memory and CPU cycles for large objects with minimal redaction.
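To see the waste concretely, here is a self-contained snippet showing that a JSON round-trip reallocates even branches that were never going to be redacted (and silently drops values such as undefined or functions along the way):

```javascript
const original = {
  database: { host: 'db.local', port: 5432 },   // never redacted
  user: { name: 'john', password: 'secret123' } // only this branch needs changes
};

// Deep clone via JSON round-trip: every branch becomes a new allocation.
const clone = JSON.parse(JSON.stringify(original));

console.log(clone.database === original.database); // false: reallocated anyway
console.log(clone.user === original.user);         // false
```

For a large config object where only one nested field needs masking, all of that copying is pure overhead.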

How Selective Cloning Works (Efficient)

slow-redact analyzes redaction paths and only clones the specific branches that need modification:

Original Object Structure:
┌─────────────────────────────────────────┐
│ {                                       │
│   database: { host: "...", port: 5432 } │  ← Large config (not redacted)
│   api: { endpoints: [...] }             │  ← Large config (not redacted)
│   cache: { redis: {...} }               │  ← Large config (not redacted)
│   user: {                               │  ← Contains redaction target
│     name: "john",                       │
│     password: "secret123"               │  ← REDACT THIS
│   }                                     │
│ }                                       │
└─────────────────────────────────────────┘

Selective Cloning Process:
┌─────────────────────────────────────────┐
│ Path Analysis:                          │
│ - "user.password" requires cloning      │
│   the "user" branch only                │
│                                         │
│ Result Structure:                       │
│ {                                       │
│   database: → [SHARED REF] ───┐         │
│   api: → [SHARED REF] ────────┼──────── │ Original object
│   cache: → [SHARED REF] ──────┘         │ references
│   user: → [NEW CLONE] {                 │
│     name: "john",                       │
│     password: "[REDACTED]"              │
│   }                                     │
│ }                                       │
└─────────────────────────────────────────┘
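The process in the diagram can be sketched in a few lines of plain JavaScript. This is a simplified illustration, not the actual slow-redact implementation: it ignores wildcards, arrays, bracket notation, and custom censor functions, but it shows the core idea of shallow-copying only the branches along each redaction path.

```javascript
// Selective cloning sketch: copy only the branches that lead to a
// redaction target; every other property keeps its original reference.
function selectiveRedact(obj, paths, censor = '[REDACTED]') {
  // Shallow copy of the root; untouched branches below it stay shared.
  const result = { ...obj };
  for (const path of paths) {
    const keys = path.split('.');
    let node = result;
    let missing = false;
    // Walk to the parent of the target, shallow-copying each step so
    // the original object is never mutated.
    for (let i = 0; i < keys.length - 1; i++) {
      const key = keys[i];
      if (node[key] === null || typeof node[key] !== 'object') {
        missing = true; // path doesn't exist on this object; skip it
        break;
      }
      node[key] = { ...node[key] };
      node = node[key];
    }
    const leaf = keys[keys.length - 1];
    if (!missing && leaf in node) {
      node[leaf] = censor;
    }
  }
  return result;
}

const original = {
  database: { host: 'db.local', port: 5432 },
  user: { name: 'john', password: 'secret123' }
};

const redacted = selectiveRedact(original, ['user.password']);

console.log(redacted.user.password);                  // '[REDACTED]'
console.log(original.user.password);                  // 'secret123' (unchanged)
console.log(redacted.database === original.database); // true: shared reference
```

Only the `user` branch was copied; `database` in the result is literally the same object as in the input, which is where the memory savings come from.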

Memory and Performance Impact

Traditional Deep Clone:

Memory Usage: 100% new allocation
CPU Time: O(entire object size)
Result: Completely new object tree

Selective Clone:

Memory Usage: Only cloned branches (~5-20% in typical cases)
CPU Time: O(redacted paths) instead of O(entire object)
Result: Hybrid object with shared + cloned branches

Real-World Example

Consider a typical Express.js request object being logged:

const requestObj = {
  method: 'POST',
  url: '/api/users',
  headers: {
    'content-type': 'application/json',
    'authorization': 'Bearer abc123...',  // ← REDACT THIS
    'user-agent': 'Mozilla/5.0...',
    'x-forwarded-for': '192.168.1.1'
  },
  body: {
    username: 'john',
    password: 'secret123',                // ← REDACT THIS
    email: 'john@example.com'
  },
  query: {},
  params: { id: '123' }
};

const redact = slowRedact({
  paths: ['headers.authorization', 'body.password']
});

const result = redact(requestObj);

Memory sharing analysis:

  • method, url, query, params: shared references (original objects)
  • headers: new object (contains authorization to redact)
  • content-type, user-agent, x-forwarded-for: shared string references
  • authorization: new string "[REDACTED]"
  • body: new object (contains password to redact)
  • username, email: shared string references
  • password: new string "[REDACTED]"

Result: ~85% memory sharing, ~90% reduction in allocation overhead

Moving Forward

This isn't just about one spurious CVE. It's about building more reliable, predictable software. When you choose immutability by default, entire classes of bugs simply disappear.

fast-redact remains an excellent choice when you control the object lifecycle and need absolute maximum performance. But for most applications - especially those dealing with shared objects or requiring predictable behavior - slow-redact is the better choice.

The pino ecosystem is moving to slow-redact as the default. It's a small change with big implications for reliability and developer confidence.

Sometimes the best way forward isn't to fight the broken system - it's to build a better one.

slow-redact is available on npm: npm install slow-redact. GitHub: https://github.com/pinojs/slow-redact.
