I was in Brussels a few weeks ago for FOSDEM 2026.

It was exactly what you want open source to feel like. Chaotic, entirely human, running on cheap coffee and strong Belgian beer. Thousands of developers packed into freezing university lecture halls, trading stickers, arguing about kernel scheduling, and actually looking each other in the eye.

(Here is a carousel of photos I took so you can get a feel for the physical vibe).

Being there was a massive reminder of why I have always loved the idea of contributing to open source. It is the software engineer’s ultimate rite of passage. You find a bug, fork the repo, fix it, and get your patch merged. Your name gets permanently etched into the commit history of a tool that runs the internet.

But walking those hallways and listening to the conversations between the official talks revealed a quiet, underlying panic. The open-source model is buckling. Stepping into the maintainer arena today feels like walking into a besieged city. The problem goes way beyond buggy code. The entire trust model is collapsing, driven by the exact tools that were supposed to make software engineering frictionless.

Cheap slop, expensive validation

Daniel Stenberg, the creator of curl, anchored this crisis perfectly during his FOSDEM talk, "Open Source Security in Spite of AI."

curl is foundational internet infrastructure. It runs in your car, your router, your fridge, and your phone. Right now, it is under a relentless, distributed denial-of-service attack of "helpfulness."

Stenberg pointed out a terrifying metric: 30% to 70% of the security reports submitted to curl are now entirely AI-generated slop. Two years ago, this number was zero.

Standard bad human code is easy to spot: it’s messy, poorly formatted, and visibly flawed in its logic. AI slop is insidious. These reports look highly polished. They use perfect English, aggressive politeness, and all the right corporate cybersecurity buzzwords. Fundamentally, though, they lack any real technical grounding.

When maintainers ask follow-up questions to clarify the bug, the human reporter rarely debugs anything. They act as a mindless copy-paste relay between the maintainer and ChatGPT, feeding the maintainer’s questions back to the model and pasting whatever hallucination comes out next into the GitHub thread.

The core problem is the asymmetry of effort. AI makes it extremely cheap to generate plausible-sounding security reports. The validation cost, however, is still paid entirely in human time. An LLM takes three seconds to invent a fake memory leak. A tired, underfunded volunteer maintainer takes thirty minutes to trace the logic, realize it’s a complete hallucination, and close the ticket.

Infinite supply of slop versus finite human bandwidth. The math does not end well. The situation escalated to the point where curl officially shut down its bug bounty program. The financial incentive was just feeding the botnet.
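To make that math concrete, here is a back-of-envelope sketch in Python (the rates are illustrative assumptions lifted from the numbers above, not measurements):

```python
# Back-of-envelope model of the effort asymmetry. The rates are
# illustrative assumptions based on the talk, not measured data.

GENERATION_SECONDS = 3          # LLM invents a plausible "memory leak"
VALIDATION_SECONDS = 30 * 60    # maintainer traces it and closes the ticket

print(f"validation is {VALIDATION_SECONDS // GENERATION_SECONDS}x more expensive")  # 600x

reports_per_hour = 3600 // GENERATION_SECONDS          # 1,200 reports from one laptop
triaged_per_week = (10 * 3600) // VALIDATION_SECONDS   # 20 reports at 10 h/week of triage
print(f"one hour of slop costs {reports_per_hour // triaged_per_week} weeks of triage")  # 60
```

One hour of machine output buys sixty weeks of volunteer time. That is the entire crisis in two lines of arithmetic.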

The hidden infrastructure tax

The bleed extends far past pull requests and issue trackers. Stenberg highlighted a structural warning that nobody outside of hardcore DevOps circles is talking about: open-source projects silently absorb the massive infrastructure costs created by the AI boom.

Bot traffic and AI scrapers pillage open-source documentation and repositories to train their models. curl.se currently serves over 4,000 requests per second, pushing 75 terabytes of traffic a month.
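Run those numbers and the scale becomes obvious (a quick sanity check, assuming the figures from the talk are roughly accurate):

```python
# Quick sanity check on the curl.se traffic figures from the talk.

requests_per_second = 4_000
seconds_per_month = 30 * 24 * 3600                   # ~2.6 million
monthly_requests = requests_per_second * seconds_per_month
print(f"{monthly_requests / 1e9:.1f} billion requests/month")            # ~10.4 billion

monthly_bytes = 75e12                                # 75 TB
print(f"~{monthly_bytes / monthly_requests / 1e3:.1f} KB per response")  # ~7.2 KB
```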

Maintainers pay for that server bandwidth. The open-source ecosystem effectively subsidizes the training data for trillion-dollar AI companies, while simultaneously having its maintainer attention span destroyed by the output of those exact same models.

Guilty until proven Human

Exhausted maintainers are closing the gates.

Open source used to operate on a default-trust model: innocent until proven guilty of writing bad code. Today, you are considered a bot until proven human. The old norms of open, permissionless collaboration are facing aggressive rewrites centered around reviewer scarcity and defensive filtering.

Look at the heavy hitters making drastic moves:

  • TLDraw: announced they were automatically closing pull requests from external contributors. An open PR is a social contract where a maintainer promises to review your work. Faced with a queue flooded by AI slop, they simply cannot honor that commitment without burning out their core team.

  • Ghostty: implemented a strict policy requiring contributors to explicitly disclose any AI assistance beyond simple tab-completion. The danger lies in developers generating massive architectural code changes without actually comprehending the state management they've just built.

  • EFF: introduced strict governance on LLM-assisted contributions, explicitly prioritizing high-quality, deeply understood software over simply generating more lines of code in less time.

The golden age of the "drive-by pull request" is dead. To contribute now, you need a proof of pulse. You have to hang out in the Discord, build social capital, write documentation, and prove you understand the architecture before submitting a single line of executable code.

Great at parsing, terrible at owning

The nuance here is critical. Stenberg is not blindly anti-AI, and neither am I.

In his talk, he admitted that curl successfully fixed over 100 legitimate issues found by AI tools. AI excels at catching things human analyzers miss:

  • spec-to-code mismatches
  • obscure edge cases
  • bizarre multi-library interactions that are too tedious to trace manually

AI serves as a fantastic assistant for static analysis and a great first-pass code reviewer. It fails completely as a substitute for engineering judgment. The tooling makes low-quality behavior fast and scalable. Too many new "contributors" cannot tell the difference between a real finding and a confidently fabricated one.
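To see what a spec-to-code mismatch looks like in practice, here is a hypothetical illustration (invented for this post, not curl code): RFC 9110 defines the Content-Length header as ASCII digits only, but a naive parser quietly accepts more than the spec allows.

```python
# Hypothetical spec-to-code mismatch (invented example, not curl code).
# RFC 9110 defines Content-Length as 1*DIGIT: ASCII digits only.

def parse_content_length(value: str) -> int:
    # Looks reasonable, but int() also accepts "+42", " 42 ", and "4_2",
    # all forbidden by the spec. Parser disagreements like this are a
    # classic ingredient of request-smuggling bugs.
    return int(value)

def parse_content_length_strict(value: str) -> int:
    # Spec-faithful version: reject anything that is not pure ASCII digits.
    if not (value.isascii() and value.isdigit()):
        raise ValueError(f"invalid Content-Length: {value!r}")
    return int(value)
```

An AI scanner is genuinely good at spotting this kind of gap between the RFC and the code. Deciding whether the gap is exploitable, and whether the fix breaks real-world clients, still takes a human.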

The rise of the Slopfork

There is a flip side to this chaos. While AI destroys the PR queues of legacy projects, it also mutates how we compete, creating a new phenomenon: the slopfork.

A slopfork completely bypasses the traditional Git fork. You don't branch a repository, respect its commit history, and thoughtfully maintain parity. Instead, you use an LLM to generate an alternative implementation of a massive, established public API from scratch.

Look at the recent release of Vinext. A single engineer, directing an LLM, rebuilt about 94% of the Next.js public API surface on top of Vite in under a week. The total cost in API tokens hovered around $1,100.

Let’s be ruthlessly precise about what this actually proves. It does not mean AI can magically clone any corporate codebase. Vinext was uniquely suited to this approach because Next.js is exceptionally well-specified, heavily documented, widely represented in training data, and crucially backed by a massive public test suite. You cannot point an LLM at a messy, undocumented, proprietary banking backend and expect anything but garbage.

Vinext itself is highly experimental. It relies on its test suite as the primary quality gate rather than human code review, and it launched with known bugs: early on, an issue was filed where it failed on a basic hello-world Next.js app. It is not a battle-tested, production-grade replacement today.
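To picture what "test suite as the primary quality gate" means, here is a minimal sketch (the runner invocation, report fields, and threshold are my assumptions for illustration, not how Vinext is actually wired):

```python
# Minimal sketch of a compatibility gate: run a framework test suite
# against the AI-generated reimplementation and gate releases on the
# pass rate. All commands, fields, and thresholds here are assumptions.

import json
import subprocess

PASS_RATE_GATE = 0.94  # ship only if we match ~94% of expected behavior

def run_suite() -> float:
    """Run the test suite, return the fraction of tests that pass."""
    result = subprocess.run(
        ["npx", "vitest", "run", "--reporter=json"],  # assumed runner setup
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    total = report["numTotalTests"]
    return report["numPassedTests"] / total if total else 0.0

if __name__ == "__main__":
    rate = run_suite()
    print(f"compatibility: {rate:.1%}")
    if rate < PASS_RATE_GATE:
        raise SystemExit("below the compatibility gate; do not release")
```

A gate like this is cheap to run and brutally honest about coverage, but it only measures what the tests measure. Everything outside the suite is exactly the long tail where these clones fall apart.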

But the long-term implications are still staggering. For years, frameworks stayed dominant because reimplementing their public API surface was simply too expensive and painful for competitors. That specific protection is dead. Vinext proves that for well-documented frameworks with strong test suites, AI dramatically lowers the cost of building a compatible alternative. "Complexity" is no longer an insurmountable moat.

The looming crisis, however, is maintenance. When an engineer prompts a massive compatibility layer into existence, they don’t deeply understand the state management they just deployed. As these API clones hit the long tail of edge cases, debugging them becomes incredibly difficult. We are entering an era where the software lifecycle might shift entirely: when a deep bug appears in an AI-generated implementation, the most cost-effective solution might not be to debug it, but to burn the architecture down and prompt a new one.

The new game

The open-source world is officially bifurcating.

On one side, the borders of existing, high-value projects are turning into heavily guarded fortresses to survive the AI DDoS attack. Maintainer attention has become the only strictly scarce resource. Open source is transforming into a high-trust network where reputation precedes code.

On the other side, the cost to build your own fortress from scratch has collapsed to the price of an API bill.

If you want to play in the big leagues now, throwing code over the wall and expecting a parade gets you nowhere. You have to show up as a human. But if you refuse to play the social game to get past the filters, you have an alternative: SLOPFORK IT!!! (I mean, you still have to burn thousands of euros in tokens and nobody is going to use it, but still.)

Choose your game.