Open Source Projects Splinter into Opposing Camps as AI-Generated Contributions Force a Governance Reckoning
A RedMonk study of 32 organizations reveals a fragmented policy landscape as projects from cURL to the Linux kernel draw competing lines on AI code.
Overview
The open-source ecosystem is fracturing along a new fault line: what to do about code written by artificial intelligence. A RedMonk analysis published on February 26 surveyed 32 open-source organizations and found no consensus emerging. Instead, the community is splintering into at least three distinct camps — outright bans, cautious conditional acceptance, and permissive frameworks — each reflecting fundamentally different assumptions about quality, copyright, and the social contract that holds volunteer-maintained software together.
The study arrives at a moment of acute pressure. Maintainers across the ecosystem report being overwhelmed by a surge of low-quality, AI-generated pull requests and bug reports that consume review time without delivering value. Several high-profile projects have responded with drastic measures, while others have crafted nuanced middle-ground policies. The result is a patchwork governance landscape that could define how billions of lines of critical infrastructure code are maintained for years to come.
The Hard-Ban Camp
A growing number of projects have concluded that AI-generated code is simply not worth the risk. Gentoo Linux became the first major distribution to impose an outright ban in April 2024, citing copyright uncertainty, quality concerns, and environmental objections. NetBSD followed with a policy classifying all LLM-generated code as “tainted” — a designation originally reserved for code of uncertain licensing provenance — and requiring explicit core team approval before any such code can be committed, as reported by The Register. The Servo web browser engine prohibits all contributions generated by “LLM or other probabilistic tools, including but not limited to Copilot or ChatGPT,” covering both code and documentation.
FreeBSD has been moving toward a similarly cautious stance. As of September 2025, the project’s core team was still developing a formal policy but had indicated its intent to bar AI-generated source code while carving out a narrow exception for documentation and translation work, where LLMs can accelerate output without introducing the same categories of risk, according to The Register. The core team cited license concerns as the primary obstacle, noting that LLM output “comes from content from all kinds of sources that has previously been copied without consent.”
These bans are not theoretical. They respond to concrete problems. Daniel Stenberg, creator of cURL, shut down the project’s six-year bug bounty program in January 2026 after AI-generated submissions reached approximately 20 percent of all reports, with valid submission rates collapsing to just 5 percent. In one 16-hour stretch, the project received seven submissions — many describing vulnerabilities that did not exist.
The Conditional Middle Ground
Between outright prohibition and open acceptance sits an increasingly sophisticated set of “human-in-the-loop” policies. The LLVM compiler infrastructure project adopted this approach in January 2026, banning fully autonomous AI submissions outright while permitting AI-assisted contributions only when a human contributor has reviewed, understood, and can independently explain the code. Contributions containing substantial AI-generated content must be labeled, and AI tools cannot be used for issues marked “good first issue” — preserving those as learning opportunities for new developers.
The Electronic Frontier Foundation introduced its own policy on February 19, applying it to four open-source projects: Certbot, Privacy Badger, Boulder, and Rayhunter. The EFF will accept LLM-generated code but requires human-authored documentation, mandatory disclosure of AI tool usage, and proof that contributors understand what they are submitting. “LLMs excel at producing code that looks mostly human generated, but can often have underlying bugs that can be replicated at scale,” wrote EFF staff members Samantha Baldwin and Alexis Hancock. As The Register noted, the EFF acknowledged that “banning a tool is against our general ethos,” but emphasized that these tools carry systemic problems spanning privacy, censorship, and environmental concerns.
The Linux kernel has taken a pragmatic approach. NVIDIA engineer Sasha Levin proposed guidelines under which AI tools may collaborate on kernel development, but the legally binding Signed-off-by line — which certifies that the contributor has the right to submit the code — must remain exclusively human.
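The Signed-off-by certification at issue is an ordinary git trailer that git itself appends when a contributor commits with the `-s` flag. A minimal sketch of that Developer Certificate of Origin workflow follows; the repository name, file, and contributor identity are hypothetical.

```shell
# Minimal DCO sketch; repository, file, and identity are hypothetical.
git init -q dco-demo && cd dco-demo
git config user.name  "Jane Developer"
git config user.email "jane@example.com"

echo "example change" > fix.txt && git add fix.txt

# -s appends "Signed-off-by: Jane Developer <jane@example.com>" to the
# commit message, certifying the right to submit the change. Under the
# proposed kernel guidelines, this certification must come from a human
# even when an AI tool helped write the patch.
git commit -q -s -m "drivers: example fix"

git log -1 --format=%B
# Last line of output: Signed-off-by: Jane Developer <jane@example.com>
```

The distinction the proposal draws is precisely here: the tool may help produce the diff, but the sign-off is a legal attestation only a person can make.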
The Permissive Framework
At the other end of the spectrum, the Linux Foundation’s generative AI policy treats AI-generated contributions “no differently” than traditionally written code, requiring only that contributors verify their AI tool’s terms do not conflict with the project’s open-source license and that any third-party content is properly attributed. Individual Linux Foundation projects may establish more restrictive rules, but the foundation-level guidance defaults to acceptance.
The Apache Software Foundation and Eclipse Foundation have opted for labeling regimes rather than bans, recommending “Generated-by:” tags in commit messages or disclaimers beneath copyright headers, respectively. The OpenInfra Foundation distinguishes between “Generated-By” and “Assisted-By” labels, treating AI code as coming from an “untrusted source” that requires additional scrutiny.
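Because these labels are plain git trailers, they can be added mechanically with `git interpret-trailers`. The sketch below is illustrative: the commit subject, tool name, and author identity are hypothetical, and the trailer names mirror the Apache (“Generated-by:”) and OpenInfra (“Assisted-By:”) conventions described above.

```shell
# Append provenance trailers to a draft commit message.
# "ExampleTool 1.0" and the author identity are hypothetical; a project
# following the Apache convention would use "Generated-by:" instead of
# "Assisted-By:" for fully machine-written changes.
printf '%s\n' \
  'net: fix checksum offload in example driver' \
  '' \
  'Description of the change goes here.' |
git interpret-trailers \
  --trailer 'Assisted-By: ExampleTool 1.0' \
  --trailer 'Signed-off-by: Jane Developer <jane@example.com>'
```

The command emits the message with both trailers appended after the body, which is what makes labeling regimes machine-checkable: CI tooling can parse the trailers and route “Generated-By” commits for the extra scrutiny the OpenInfra policy calls for.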
The Deeper Crisis: Maintainer Sustainability
Beneath the policy debate lies a more fundamental challenge. The RedMonk research on “AI Slopageddon” documented how AI-generated contributions are eroding the implicit social contract between maintainers and contributors. Historically, the effort required to write code and understand a codebase served as a natural quality filter. AI removes that friction, allowing anyone to generate plausible-looking submissions with minimal comprehension.
The consequences extend beyond wasted review time. Mitchell Hashimoto, creator of Ghostty, implemented a zero-tolerance policy in January 2026 after experiencing the phenomenon firsthand: “This is not an anti-AI stance. This is an anti-idiot stance,” he told InfoQ. Steve Ruiz of tldraw went further, auto-closing all external pull requests entirely — concluding that maintainers can now generate fixes themselves faster than they can review AI-assisted external contributions.
Seth Larson of the Python Software Foundation captured the burnout dimension, warning that “wasting precious volunteer time doing something you don’t love and in the end for nothing is the surest way to burn out maintainers,” as quoted in RedMonk’s analysis.
The Security Dimension
The governance fragmentation has direct security implications. The Open Source Security Foundation convened its Package Manager Security Forum on February 2, bringing together registry operators from major ecosystems to address shared challenges. The OpenSSF’s Vulnerability Disclosures Working Group is now developing best practices guidelines specifically for projects impacted by AI slop, while its AI/ML Security Working Group has established collaboration with NIST, OWASP, and other standards bodies.
The concern is not abstract. When low-quality AI-generated bug reports describe non-existent vulnerabilities, they consume the same review resources that would otherwise catch real security flaws. When AI-generated patches introduce subtle bugs replicated at scale — a pattern the EFF explicitly warned about — they can propagate through dependency chains that underpin critical infrastructure.
What We Don’t Know
Several important questions remain unanswered. No reliable method exists to consistently detect AI-generated code, making enforcement of any policy — whether ban or disclosure requirement — ultimately dependent on contributor honesty. The copyright status of LLM output remains legally untested in most jurisdictions, leaving projects that accept AI contributions exposed to future liability shifts. And the long-term effects on contributor pipelines are unclear: if “good first issue” protections like LLVM’s become widespread, they may preserve onramps for new developers, but they cannot address the broader erosion of engagement that sustains volunteer-driven projects.
Perhaps most critically, the policy fragmentation itself creates coordination challenges. A contributor who learns one project’s rules may unknowingly violate another’s, and the absence of any cross-ecosystem standard means each project must independently invest governance resources in a problem that affects them all.
Analysis
The RedMonk study’s central finding — 32 surveyed organizations and no consensus among them — is less a snapshot of diversity than a signal of institutional failure. Open source has historically relied on shared norms — coding standards, licensing conventions, review culture — that transcended individual projects. The AI contribution question is fracturing that commons precisely because the underlying tensions (quality, copyright, sustainability) have no clean technical resolution.
The projects choosing outright bans are making a bet that enforcement difficulty is outweighed by the clarity of the signal: do not waste maintainer time. The projects choosing conditional acceptance are betting that disclosure and human-in-the-loop requirements can preserve code quality while acknowledging that AI tools are now embedded in developer workflows. The permissive camp is betting that existing review processes will catch problems regardless of how code was generated.
All three bets carry risk. But the most dangerous outcome may be the one already unfolding: a fragmented landscape where the governance burden falls on individual maintainers — the same volunteers already reporting burnout — rather than on the platforms and foundations with the resources to develop shared solutions. OpenUK CEO Amanda Brock may have captured the trajectory best when she predicted, as quoted by The Register: “We’re gonna see a lot more AI red cards in the coming weeks.”