The DCO Debate: Who Is Responsible for AI-Generated Code?
Exploring the reality, benefits, challenges and accountability of AI-assisted development in the open-source community.
AI-assisted development is already a reality, and the open source community is coming together to figure out how to manage it.
Many major projects and organizations now agree on a few key ideas: people must stay accountable, being open builds trust, and existing contribution rules like the Developer Certificate of Origin (DCO) still matter. The Linux kernel community, Red Hat’s legal team, and the OpenJS Foundation have all come to similar conclusions. AI can help with development, but people are still responsible for what they submit.
This is the real situation shaping software development today. The question is no longer if we should use AI, but how to use it in a responsible way.
Against that backdrop, a recent discussion in the Node.js project explored how these principles apply in practice.
The Trigger
PR #61478 adds a Virtual File System to Node.js core, spanning almost 19,000 lines across about 80 files. I built it with Claude Code over Christmas 2025, and was open about that from the beginning. The AI handled repetitive tasks, such as implementing every fs method, setting up test coverage, and generating documentation. I focused on the architecture and API design, and I checked every line. Without AI, I wouldn’t have been able to do this as a holiday side project, let alone alongside my job as a founder and my life as a parent. It simply would not have happened, just as it never had before.
During the review, a Node.js collaborator raised concerns, not about the code’s quality or architecture, but about whether AI-assisted contributions can meet the requirements of DCO 1.1, the legal certification every Node.js contributor signs when opening a pull request. He also started a personal petition asking the Node.js TSC to vote against allowing AI-assisted development in the core project.
The Concern
The collaborator’s argument was straightforward: the DCO requires contributors to assert they have the right to submit the code, but with AI-generated code, the provenance may be unclear.
Specifically, the concern was that AI systems like Claude could have been trained on a mix of unlicensed or incompatible-licensed source code, making it difficult for a contributor to confidently assert compliance. Research showing that large language models can sometimes reproduce portions of their training data added weight to the concern about inadvertent copying.
His argument went further. Calling something an “assistive tool” does not automatically make its output license-clean. The analogy offered was cp -rf: nobody would argue that using cp -rf to copy GPL-licensed code into a contribution is DCO-compliant, even though copying is technically just another tool. The burden of proof, the argument went, lies on the submitter. That’s the whole point of the DCO.
These are important concerns that reflect broader questions the industry is still working through.
Where Does the Broader Open Source Ecosystem Stand?
While this discussion unfolded on the PR, several major organizations had already addressed these questions.
The Linux kernel community, which created the DCO, has clear policy documents about AI-assisted contributions. Their coding-assistants.rst requires a strict human-in-the-loop process. AI agents are not allowed to add Signed-off-by tags. Only humans can legally certify the DCO. The person submitting the code must review all AI-generated code, check for licensing compliance, and add their own sign-off. AI assistance must be disclosed with an Assisted-by tag. The DCO’s creators themselves say it still applies: the human signs it and takes full responsibility.
Red Hat’s CTO Chris Wright and legal counsel Richard Fontana published a detailed analysis that directly addresses the DCO question. They explained how the DCO has never been interpreted to require that every line of a contribution must be the personal creative expression of the contributor. Many contributions contain routine, non-copyrightable material, and developers still sign off on them. The real point of the DCO is responsibility. With disclosure and human oversight, AI-assisted contributions can be entirely compatible with the spirit of the DCO.
Red Hat also made a historical point that I find convincing. For years, cautious commercial users of open source worried about “laundered” code – contributions that might hide copyrighted material under unclear or problematic terms. Over time, those fears proved mostly unfounded. AI-assisted contributions could also include hidden copyrighted material, but this is a manageable risk and not very different from challenges open source has already faced and handled.
The OpenJS Foundation, Node.js’s own legal body, weighed in directly on the PR. Executive Director Robin Ginn confirmed that the foundation had consulted legal counsel and sees no DCO issue with AI-assisted contributions. They committed to formally documenting this position.
Three independent organizations – the DCO’s creators, one of the world’s largest open source legal teams, and Node.js’s own foundation – all agree on the same answer. AI does not break the DCO. What matters is accountability.
The Practical View
The DCO has never focused on how code is written. It is about whether the contributor has the right to submit the code.
If AI-generated code includes material with the wrong license, the responsibility still belongs to the contributor who signed off, just as it would if they copied code by hand. In this way, AI is not a special case; it is just another tool. Compilers change code in ways developers do not always track. Template generators create output from their own logic. Stack Overflow answers are often copied into codebases without much thought about licensing.
We have always depended on the DCO: the person who submits the code is responsible.
There is also a bigger context to consider. Like many other OSS maintainers, I have recently received open source sponsorships from both OpenAI and Anthropic in the form of free subscriptions (the PR above was built on a subscription I paid for myself). These companies know about the challenges open source faces and want to help keep the ecosystem strong. They also trained their models on our code – on Node.js, Fastify, Undici, and the work of thousands of open source contributors. Giving back to the projects their models learned from is an important and appropriate step. This relationship, where AI companies benefit from open source and invest back into it, helps keep the ecosystem healthy. It does not solve the legal questions, but it shows that major AI labs see open source as a partner, not just a source of training data.
What This Means in Practice
The real change AI brings is not legal, but operational. AI shifts the bottleneck from writing code to reviewing it.
I have been saying this for months: having a human in the loop is a feature, not a flaw. AI moves the bottleneck to judgment and review. The person who reviews, understands, and takes responsibility for the code is doing the most important work.
This debate makes that thesis concrete.
Who Built This? I Did.
I want to make something clear: I created that VFS implementation. Saying I did not, just because I used Claude Code to assist, is not accurate. My grandmother used to make handmade pasta with the “Nonna Papera,” the pasta maker every Italian family had in their kitchen. No one would have said it was not her pasta. She chose the flour, the eggs, the thickness, and the shape. The tool only helped her hands. The pasta was hers.
I chose the architecture. I shaped the API based on feedback from all reviewers who commented on the PR. I made the design decisions, caught and fixed issues the AI introduced, and I understand what every part of the code does and why. I signed the DCO. My name is on it. If there’s a bug, it’s my responsibility. If there’s a licensing problem, I’m the one who certified compliance.
Here is a question to consider: what about the reviewers? The collaborators who reviewed the PR, suggested changes, caught edge cases, and helped shape the final implementation—are they not co-authors of this work? They always have been, in every PR in Node.js history. No one has ever doubted whether a reviewer’s contributions to a pull request are “real” just because they did not write the first version. The review process is how open source creates quality. That did not change just because the first draft came from a new kind of tool.
That is the human in the loop, not just as an idea, but as something we actually do.
The Path Forward
The solution is already taking shape in three ways.
The OpenJS Foundation is formalizing its position that AI-assisted contributions are compatible with the DCO when a human takes responsibility for them. This gives Node.js contributors clear legal ground.
The Node.js TSC will deliberate on what disclosure and attribution practices to adopt. Given the objection, this will go to a vote, following the decision-making process that governs the Node.js community. The human-in-the-loop model proposed by the Linux kernel is solid, and requiring an Assisted-by tag is a good model too. Transparency about tooling helps reviewers calibrate their scrutiny and builds trust over time.
The community also needs to agree on what “human review” really means for AI-assisted contributions. It is not enough to just say, “I reviewed it.” We need to be able to answer questions like: Do you understand what this code does? Can you explain the design choices? Can you respond to feedback without asking the AI again? Can you maintain this code a year from now? These are the same questions we have always asked contributors. The tool may change, but the expectations for people do not.
The Reality for Open Source
AI-assisted development is not just a future idea: it is the reality of how more and more professional software is written today. Projects that learn how to accept AI-assisted contributions responsibly, with openness, human review, and clear accountability, will attract more contributors, move faster, and stay relevant. Projects that ban AI-assisted contributions might feel safer for now, but they are limiting their contributor pool just as demand for open source software is growing. I respect any project’s right to set its own rules, but I believe the projects that succeed will be those that focus on the quality of the contribution and the contributor's responsibility, not the tools they use.
The most important role in software development has not changed. It is not the person or tool that writes the code. It is the person who understands, reviews, and takes responsibility for it.
That’s what the DCO was designed to enforce. And it still does.