The Tragedy of the Artificial Commons (FOSDEM '26 Edition)
I was in Brussels a few weeks ago for FOSDEM 2026.
It was exactly what you want open source to feel like. Chaotic, entirely human, running on cheap coffee and strong Belgian beer. Thousands of developers packed into freezing university lecture halls, trading stickers, arguing about kernel scheduling, and actually looking each other in the eye.
(Here is a carousel of photos I took so you can get a feel for the physical vibe).
FOSDEM 2026
Being there was a massive reminder of why I have always loved the idea of contributing to open source. It is the software engineer’s ultimate rite of passage. You find a bug, fork the repo, fix it, and get it merged. Your name gets permanently etched into the commit history of a tool that runs the internet.
But walking those hallways and listening to the conversations between the official talks revealed a quiet, underlying panic. The open-source model is buckling. Stepping into the maintainer arena today feels like walking into a besieged city. The problem goes way beyond buggy code. The entire trust model is collapsing, driven by the exact tools that were supposed to make software engineering frictionless.
Cheap slop, Expensive validation
Daniel Stenberg, the creator of curl, anchored this crisis perfectly during his FOSDEM talk, "Open Source Security in Spite of AI."
curl is foundational internet infrastructure. It runs in your car, your router, your fridge, and your phone. Right now, it is under a relentless, distributed denial-of-service attack of "helpfulness."
Stenberg pointed out a terrifying metric: 30% to 70% of the security reports submitted to curl are now entirely AI-generated slop. Two years ago, this number was zero.
Standard bad human code is easy to spot: it’s messy, badly formatted, and visibly flawed in its logic. AI slop is insidious. These reports look highly polished. They use perfect English, aggressive politeness, and all the right corporate cybersecurity buzzwords. Fundamentally, though, they lack any real technical grounding.
When maintainers ask follow-up questions to clarify the bug, the human reporter rarely debugs anything. They act as a mindless copy-paste relay between the maintainer and ChatGPT, feeding the error back to the model and pasting whatever hallucination comes out next into the GitHub thread.
The core problem is the asymmetry of effort. AI makes it extremely cheap to generate plausible-sounding security reports. The validation cost, however, is still paid entirely in human time. An LLM takes three seconds to invent a fake memory leak. A tired, underfunded, volunteer maintainer takes thirty minutes to trace the logic, realize it’s a complete hallucination, and close the ticket.
Infinite supply of slop versus finite human bandwidth. The math does not end well. The situation escalated to the point where curl officially shut down its bug bounty program. The financial incentive was just feeding the botnet.
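You can make that asymmetry concrete with a quick back-of-envelope model. The report rate below is a made-up illustration, not curl's real number; the 3-second generation time and 30-minute triage cost come from the scenario above, and the slop ratio is the midpoint of Stenberg's 30%-70% range.

```python
# Back-of-envelope model of the triage asymmetry.
# All inputs are illustrative assumptions, not curl's real numbers.

reports_per_day = 20          # hypothetical inbound security reports
slop_ratio = 0.5              # midpoint of the 30%-70% range cited above
triage_minutes = 30           # human time to disprove one hallucination
generation_seconds = 3        # LLM time to invent one fake memory leak

# Daily cost on each side of the wall, in minutes
human_cost = reports_per_day * slop_ratio * triage_minutes
machine_cost = reports_per_day * slop_ratio * generation_seconds / 60

print(f"Maintainer triage burden: {human_cost:.0f} min/day")    # 300 min/day
print(f"Slop generation cost: {machine_cost:.1f} min/day")      # 0.5 min/day
print(f"Asymmetry: {human_cost / machine_cost:.0f}x")           # 600x
```

Even at a modest 20 reports a day, half of them slop, a volunteer burns five hours daily disproving output that cost the other side thirty seconds to produce. That ratio only gets worse as generation gets cheaper.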
The hidden infrastructure tax
The bleed extends far past pull requests and issue trackers. Stenberg highlighted a structural warning that nobody outside of hardcore DevOps circles is talking about: open-source projects silently absorb the massive infrastructure costs created by the AI boom.
Bot traffic and AI scrapers pillage open-source documentation and repositories to train their models. curl.se currently serves over 4,000 requests per second, pushing 75 terabytes of traffic a month.
Maintainers pay for that server bandwidth. The open-source ecosystem effectively subsidizes the training data for trillion-dollar AI companies, while simultaneously having its maintainer attention span destroyed by the output of those exact same models.
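The quoted figures pass a quick sanity check, too. A sketch of the arithmetic, assuming a 30-day month and decimal terabytes:

```python
# Sanity-check the traffic figures quoted above: 4,000 req/s, 75 TB/month.
requests_per_second = 4_000
seconds_per_month = 30 * 24 * 3600            # assume a 30-day month
monthly_requests = requests_per_second * seconds_per_month

monthly_bytes = 75e12                          # 75 TB, decimal terabytes
bytes_per_request = monthly_bytes / monthly_requests

print(f"{monthly_requests:,} requests/month")                     # ~10.4 billion
print(f"~{bytes_per_request / 1e3:.1f} kB average per request")   # ~7.2 kB
```

Roughly ten billion requests a month at about 7 kB each. Small per hit, enormous in aggregate, and every byte of it billed to the project, not the scraper.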
Guilty until proven Human
Exhausted maintainers are closing the gates.
Open source used to operate on a default-trust model—innocent until proven guilty of writing bad code. Today, you are considered a bot until proven human. The old norms of open, permissionless collaboration are facing aggressive rewrites centered around reviewer scarcity and defensive filtering.
Look at the heavy hitters making drastic moves:
- TLDraw: announced they were automatically closing pull requests from external contributors. An open PR is a social contract where a maintainer promises to review your work. Faced with a queue flooded by AI slop, they simply cannot honor that commitment without burning out their core team.
- Ghostty: implemented a strict policy requiring contributors to explicitly disclose any AI assistance beyond simple tab-completion. The danger lies in developers generating massive architectural code changes without actually comprehending the state management they've just built.
- EFF: introduced strict governance on LLM-assisted contributions, explicitly prioritizing high-quality, deeply understood software over simply generating more lines of code in less time.
The golden age of the "drive-by pull request" is dead. To contribute now, you need a proof of pulse. You have to hang out in the Discord, build social capital, write documentation, and prove you understand the architecture before submitting a single line of executable code.
Great at parsing, Terrible at owning
The nuance here is critical. Stenberg is not blindly anti-AI, and neither am I.
In his talk, he admitted that curl has successfully fixed over 100 legitimate issues found by AI tools. AI excels at catching things human reviewers miss:
- spec-to-code mismatches
- obscure edge cases
- bizarre multi-library interactions that are too tedious to trace manually
AI serves as a fantastic assistant for static analysis and a great first-pass code reviewer. It fails completely as a substitute for engineering judgment. The tooling makes low-quality behavior fast and scalable. Too many new "contributors" cannot tell the difference between a real finding and a confidently fabricated one.
The rise of the Slopfork
There is a strange, paradoxical flip side to this chaos. While AI destroys the PR queues of legacy projects, it also annihilates the concept of "complexity as a moat" through a new phenomenon: the slopfork.
A slopfork completely bypasses the traditional Git fork. You don't branch the repository, respect the commit history, or try to thoughtfully maintain parity. Instead, you point an LLM at the documentation of a massive, established framework and demand a total rebuild from scratch.
Look at the recent release of Vinext. One single engineer, directing an LLM, rebuilt 94% of the public API surface of Next.js from scratch on top of Vite in under a week. The total cost in API tokens hovered around $1,100.
This represents a massive paradigm shift. For years, corporate frameworks stayed dominant simply because reimplementing their API surface remained too expensive and painful for competitors. That protection has vanished. If a solo developer can cleanly reverse-engineer and slopfork the most popular React framework in the world in seven days, no corporate codebase is safe.
But here is the catch. Nobody maintains a slopfork.
The prompter rarely understands the underlying architecture they just deployed. It functions as disposable infrastructure. When a deep, foundational bug eventually appears in a slopfork, you don't debug it. You burn the repo down and prompt a new one. Software transitions from a crafted asset to single-use plastic.
The New Game
The open-source world is officially bifurcating.
- Fortress mode: existing, high-value projects turn into heavily guarded fortresses to survive the AI DDoS. Maintainer attention becomes the only strictly scarce resource. Open source transforms into a high-trust network where reputation precedes code.
- Slopfork mode: the cost to build your own fortress from scratch drops to zero.
If you want to play in the big leagues now, you can't just throw code over the wall and expect a parade. You have to show up as a human. And if you refuse to play the social game to get past the filters, you have an alternative: the exact same tools breaking the commons let you slopfork your own empire in a weekend.
Choose your game.