AI-Generated 'Slop' Is Overwhelming Open Source Projects, Forcing Emergency Countermeasures
From cURL killing its bug bounty to Godot maintainers burning out, the flood of low-quality AI-generated pull requests and security reports is forcing open source communities to fundamentally rethink how contributions are accepted.
Overview
A crisis years in the making is now visibly reshaping how open source projects operate. The same AI coding tools that have lowered the barrier to writing software have also lowered the barrier to submitting low-quality code—and the communities that maintain the internet’s shared infrastructure are bearing the cost.
In January 2026, the cURL project terminated its seven-year bug bounty program after AI-generated security reports overwhelmed its maintainers. In February, Godot game engine contributors described AI pull requests as “draining and demoralizing.” GitHub is now weighing new controls to let maintainers disable or gate pull requests entirely. The scale and speed of the problem have forced communities to build new institutions from scratch.
The cURL Tipping Point
For years, the cURL project—the ubiquitous command-line tool used to transfer data across the internet and embedded in billions of devices—ran a well-regarded bug bounty program through HackerOne. In nearly seven years, it confirmed 87 vulnerabilities and paid out more than $100,000 in rewards.
By late 2025, however, the signal-to-noise ratio had collapsed. As The Register reported, the program’s vulnerability confirmation rate fell below 5 percent—down from over 15 percent in previous years. In the first 21 days of 2026, cURL received 20 submissions; seven arrived within a single 16-hour window, and none described actual vulnerabilities.
Lead maintainer Daniel Stenberg shuttered the program effective the end of January. “We seem to have data that confirms that the #curl bug-bounty has received a steep increased submission rate through 2025,” Stenberg wrote, noting the submissions appeared to be AI-generated. By removing monetary rewards, Stenberg hoped to eliminate the financial incentive driving the flood. Security researchers are now directed to GitHub’s private vulnerability reporting feature instead.
The cURL episode is not an isolated case. It is the most publicized example of a pattern that has quietly degraded open source security infrastructure across the ecosystem.
Godot’s Burnout Problem
The issue extends well beyond security reports. Game engine Godot’s maintainers have been among the most vocal about the toll that AI-generated pull requests are taking on human contributors.
Rémi Verschelde, a Godot maintainer, described AI-generated PRs as “increasingly draining and demoralizing” in comments documented by The Register. Game designer Adriaan de Jongh characterized LLM-generated submissions as a “massive time waster for reviewers,” noting that “changes often make no sense” and “descriptions are extremely verbose” while users often do not understand their own proposed changes.
The Coolify Anti Slop GitHub Action—a third-party tool built specifically to detect AI-generated PRs—claims it could have closed 98 percent of slop pull requests submitted to projects that have adopted it. That figure, if accurate, suggests the true scale of the problem: maintainers reviewing pull requests are spending the vast majority of their time on contributions that should never have been submitted.
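Detectors of this kind typically score submissions on the surface signals reviewers already flag by hand, such as the extremely verbose descriptions de Jongh describes. The sketch below is a hypothetical heuristic for illustration only; the signals, thresholds, and weights are assumptions, not the Coolify action's actual logic:

```python
def slop_score(pr_description: str, diff_lines: int) -> float:
    """Hypothetical slop heuristic returning a score in [0, 1].

    Higher means more slop-like. The weights and phrase list here are
    illustrative assumptions, not the Coolify Anti Slop action's rules.
    """
    words = pr_description.split()
    score = 0.0

    # Signal 1: an extremely verbose description attached to a tiny diff,
    # the mismatch reviewers repeatedly report in AI-generated PRs.
    if diff_lines > 0 and len(words) / diff_lines > 20:
        score += 0.4

    # Signal 2: stock LLM phrasing that often survives into submitted text.
    boilerplate = (
        "as an ai",
        "this pr addresses",
        "comprehensive solution",
        "i hope this helps",
    )
    text = pr_description.lower()
    score += 0.2 * sum(phrase in text for phrase in boilerplate)

    return min(score, 1.0)
```

A real detector would gate the PR (label it, request human confirmation, or auto-close above a threshold) rather than merely score it, but the core idea is the same: cheap textual signals catch a large share of low-effort submissions before a maintainer spends review time.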
Verschelde called for more funding to hire maintainers who can deal with the volume. Without that, the human cost compounds: every hour spent rejecting a low-effort AI-generated PR is an hour not spent reviewing a legitimate contribution, fixing a real bug, or writing documentation.
A Structural Problem, Not Just a Volume Problem
A growing body of analysis argues that the damage extends beyond maintainer burnout. As The Register reported in January, Tailwind Labs found that traffic to its documentation had fallen roughly 40 percent since early 2023—even as the framework grew in popularity. CEO Adam Wathan attributed the decline to developers using AI assistants that bypass documentation entirely, leading the company to lay off three employees whose roles depended on that traffic.
The dynamic is structural: when AI tools answer questions that developers previously sought out from maintainers, those maintainers lose visibility, reputation signals, and the engagement that can convert into funding or employment. For major projects with corporate backing, this is painful but survivable. For the long tail of critical open source infrastructure maintained by volunteers, it threatens project viability.
The Open Source Response: Trust Networks and Platform Controls
Communities are not waiting for platform intervention. Mitchell Hashimoto—creator of Vagrant, Terraform, and the Ghostty terminal emulator—launched Vouch, a web-of-trust system designed to gate open source contributions, as TechCrunch reported. Under Vouch, contributors must be explicitly vouched for by existing trusted participants before their submissions are accepted. The system supports denouncement of bad actors and allows trust lists to be aggregated across projects, so a contributor trusted by one project in a network can carry that reputation to others.
Hashimoto is testing Vouch on Ghostty, framing it as a return to the social norms that governed open source before AI removed the natural friction from contribution. The core idea is simple: contributors should have to introduce themselves, describe their intentions, and build reputation before their code is reviewed at scale.
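The mechanics of such a web of trust fit in a few lines. The sketch below is an illustrative model of the concepts TechCrunch describes—vouching, denouncement, and aggregating trust lists across projects—not Vouch's actual data model or API:

```python
from collections import defaultdict


class TrustGraph:
    """Illustrative web-of-trust gate (an assumption, not Vouch's real code)."""

    def __init__(self, roots):
        self.roots = set(roots)          # maintainers, trusted a priori
        self.vouches = defaultdict(set)  # voucher -> contributors they vouch for
        self.denounced = set()           # explicitly flagged bad actors

    def vouch(self, voucher, contributor):
        """Only an already-trusted participant can extend trust."""
        if self.is_trusted(voucher):
            self.vouches[voucher].add(contributor)

    def denounce(self, contributor):
        """Denouncement overrides any chain of vouches."""
        self.denounced.add(contributor)

    def is_trusted(self, contributor):
        """Trusted iff reachable from a root via vouches and not denounced."""
        if contributor in self.denounced:
            return False
        seen, stack = set(), list(self.roots)
        while stack:
            user = stack.pop()
            if user == contributor:
                return True
            if user in seen or user in self.denounced:
                continue
            seen.add(user)
            stack.extend(self.vouches[user])
        return False

    def absorb(self, other):
        """Aggregate another project's trust list, so reputation carries over."""
        for voucher, vouched in other.vouches.items():
            self.vouches[voucher] |= vouched
```

In use, a project would check `is_trusted(author)` before a PR ever reaches review, which restores the friction the article describes: a stranger's first submission is gated until someone inside the network vouches for them.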
At the platform level, GitHub is developing new controls to give maintainers more authority over who can open pull requests and how. According to The Register, the company is working on features that would allow maintainers to limit pull requests to collaborators only, disable them entirely, or delete them directly from the interface—bypassing the current requirement to close-without-merging. GitHub Product Manager Matthew Isabel acknowledged that the real issue is volume rather than AI authorship per se, noting that “a bad or off-topic PR is a bad PR, regardless of where it came from.”
Multiple projects have moved to formalize AI contribution policies. Blender, Fedora, Firefox, the Linux Foundation, Servo, and LLVM have all proposed or adopted policies governing AI-assisted submissions. The pressure extends beyond contributor behavior: Gentoo Linux has begun a gradual migration from GitHub to Codeberg over a distinct but related AI concern—as The Register reported, Gentoo’s move is driven by GitHub’s “continuous attempts to force Copilot usage for our repositories,” not by the AI-generated PR problem. Gentoo currently maintains a presence on both platforms as the migration proceeds.
What We Don’t Know
The long-term trajectory is uncertain. If AI coding tools continue to improve, the gap between legitimate and low-quality contributions may narrow—or widen as bad actors find new ways to automate plausible-seeming submissions. GitHub’s platform controls have not yet shipped, and their effectiveness will depend on how granularly they can be configured.
Whether decentralized trust networks like Vouch can achieve the adoption needed to be meaningful is also unresolved. A web of trust only works if the web is large enough. For small projects with few maintainers, even a modest increase in AI-generated submissions can be destabilizing.
What is clear is that open source communities built their norms and tools in an era when submitting a pull request required genuine effort. That assumption no longer holds, and the institutional response is still catching up.