
Adventures in Nodeland

January 18, 2026

The Human in the Loop

"AI has changed my coding process, but I stand by my role in analyzing each AI-generated output."

Mike Arnaldi wrote a thought-provoking piece titled "The Death of Software Development." I respect Mike a lot. Effect is brilliant work, and his analysis of the current AI moment is sharper than most. But I think he's missing something critical.

My Workflow Has Changed

Let me be clear: I'm not here to argue that AI isn't transforming our industry. It is. My own workflow has changed dramatically.

When an issue lands in my queue today, my first instinct is to throw it at AI. Security vulnerabilities in Node.js or Undici. Bugs in Fastify. New features for Platformatic. AI handles the implementation. I've shipped dozens of fixes this way in the past few months.

But here's the thing Mike glosses over: I review every single change. Every behavior modification. Every line that ships.

The Bottleneck Has Shifted

Mike writes that he built a Polymarket analysis tool in two hours, writing zero lines of code, reviewing zero lines of code. He presents this as a triumph.

I see it differently.

My ability to ship is no longer limited by how fast I can code. It's limited by how well I can review. And I think that's exactly how it should be.

When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector? Are there edge cases the AI missed? Is this the right fix, or just a fix? When I ship a new feature, I need to understand if it fits the architecture, if it maintains backward compatibility, if it's something I can stand behind.

The moment I stop reviewing is the moment I stop being responsible for what I ship.

The Bloomberg Terminal Question

Mike asks: "If an idiot like me can clone a product that costs $30k per month in two hours, what even is software development?"

I'd ask a different question: who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?

The Bloomberg Terminal isn't expensive because the code is hard to write. It's expensive because there are people who understand financial markets, regulatory requirements, data integrity, and system reliability standing behind it. People who have spent years building mental models about what can go wrong.

Forty Years of Practices Need Rethinking

Mike is absolutely right about one thing: forty years of best practices are now outdated. The patterns we relied on, the team structures we built, the processes we followed. All of it needs to be reconsidered.

Code review processes designed for human-written code? Need rethinking. Sprint planning based on human typing speed? Obsolete. The assumption that more developers means more output? Questionable.

I've been in the Node.js ecosystem long enough to see "best practices" come and go. But this is different. This isn't a new framework or a new paradigm. This is a fundamental shift in how code gets produced. Anyone pretending we can keep doing things the old way is in denial.

Software Engineers and Architects Are More Crucial Than Ever

Mike argues that while software development is dead, software engineering is alive. I completely agree. Engineers are now "designing higher-order systems" and "building techniques." The role of software engineers and architects is more crucial than ever.

What's gone is the role of the programmer who takes a task from Jira, does it, and clocks off for the day. That job is gone. AI can do that now, faster and cheaper.

But I think there's more to it. Reviewing and assessing code done by others is what we have done in open source since forever. As a maintainer of Node.js, Fastify, Pino, and Undici, and as Chair of the Node.js Technical Steering Committee, I spend most of my time reviewing pull requests from contributors I've never met. I don't write most of the code that ships. I review it, I assess it, I decide if it's good enough. This isn't new to me. AI is just another contributor now.

I've also shipped contributions that I did not fully understand. I regret them fondly. Every maintainer has done this at some point. And every time, it comes back to bite you. The bugs are harder to fix. The behavior is harder to explain. The technical debt compounds. This is why review matters. This is why understanding matters.

Yes, I design systems. But more importantly, I provide judgment. I decide what should be built, how it should behave, and whether the implementation matches the intent. I catch the cases where the AI confidently produces something that looks right but isn't. I understand the context that no prompt can fully capture.

This isn't a new skill. It's the same skill senior engineers have always had. The difference is that now it's the primary skill, not one of many.

The Real Question

Mike says that "the average software developer is not even close to understanding the extent of this change." I agree. But I think the misunderstanding cuts both ways.

Some developers underestimate AI. They think their job is safe because AI makes mistakes. They're wrong. AI is already good enough to handle a huge portion of routine coding work.

But some AI enthusiasts overestimate the transformation. They think the human in the loop is a temporary limitation, a bottleneck to be optimized away. I think they're wrong too.

The human in the loop isn't a limitation. It's the point.

When I ship code, my name is on it. When there's a security vulnerability in Undici or a bug in Fastify, it's my responsibility. I can use AI to help me move faster, but I cannot outsource my judgment. I cannot outsource my accountability.

What I'm Actually Worried About

My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.

I've been maintaining open source projects for over a decade. I've seen what happens when people ship code they don't understand. It's not pretty. And the scale of damage possible when you can generate code at AI speed is much larger than when you're limited by typing speed.

The Industrial Revolution comparison is apt, but not in the way Mike suggests. The Industrial Revolution didn't just make goods abundant. It also created new categories of industrial accidents, new forms of pollution, new ways for things to go wrong at scale. It took decades to develop the safety practices, regulations, and cultural norms to handle industrial-scale production responsibly.

We're at the beginning of that process for AI-generated software. And the answer isn't to remove humans from the loop. It's to get much better at the review part.

The Path Forward

I'm not arguing against using AI. I use it every day. I'm more productive than I've ever been.

But I've accepted that my bottleneck is now review, not coding. And I'm working on getting better at it. Faster pattern recognition. Better mental models for common failure modes. More efficient ways to verify behavior.

This is the skill that matters in 2026. Not prompting. Not "agentic infrastructure." Judgment.

Mike's right that things are changing fast. But the human in the loop isn't a bug to be fixed. It's a feature to be protected.
