How AI-Generated Pull Requests Are Breaking Open Source Mentorship

AI makes it easy to generate plausible pull requests without understanding the code. Maintainers are drowning in review work. The "3 Cs" framework helps you decide who deserves your mentorship time.

TL;DR

  • AI tools make it trivially easy to generate plausible pull requests without understanding the code, flooding maintainers with review work
  • Traditional signals (clean code, fast turnaround) no longer indicate a contributor invested time learning your codebase
  • The "3 Cs" framework (Comprehension, Context, Continuity) helps maintainers decide who deserves mentorship investment
  • If you maintain open source: require issues before PRs, ask for AI disclosure, and only mentor contributors who come back

The Big Picture

A polished pull request lands. The code looks clean. The formatting is perfect. You spend 45 minutes writing thoughtful feedback, asking clarifying questions, maybe even seeing potential in this new contributor.

Then nothing. Or worse: a follow-up that makes it clear they can't explain the change because an LLM wrote it. You just spent your afternoon debugging someone's ChatGPT session.

This isn't a one-off anymore. Projects like tldraw closed their pull requests entirely. Fastify shut down their HackerOne program after inbound reports became unmanageable. The 2025 Octoverse report shows developers merged 45 million pull requests per month, up 23% year over year. More PRs, same maintainer hours.

The cost to generate code has dropped to near-zero. The cost to review it hasn't budged. And the social systems that made open source work — trust-building, mentorship, knowledge transfer — are breaking under the load.

This is open source's "Eternal September." A constant influx of contributions that look legitimate but lack the context and commitment that used to come with them. The signals have changed. Maintainers need new filters.

How It Works

The problem isn't AI assistance itself. It's that AI has decoupled contribution from comprehension.

Traditional signals used to work because they were expensive. Clean code meant someone spent time learning your style guide. Handling edge cases meant they'd read through similar implementations. Fast turnaround on feedback meant they were invested.

Now GitHub Copilot and similar tools can generate all of that in seconds. The contributor might not understand the trade-offs. They might not know why the code works. They just know it passes CI.

This creates a review crisis. You can't tell from the code quality whether someone is ready to maintain their change. You can't tell if they'll stick around for the second iteration. You can't tell if mentoring them will multiply your impact or just burn your time.

Some maintainers are responding with blunt instruments: closing all external PRs, requiring sponsorship for review time, or abandoning mentorship entirely. That's not sustainable. Mentorship is how open source communities scale. When you mentor someone well, they mentor others. That's the multiplier effect that built everything we rely on.

The math is stark. Broadcasting to 1,000 contributors a year gets you 5,000 contributions in five years, and the growth is linear. Mentoring two people every six months, each of whom does the same, triples the community every cycle: after ten six-month cycles, that's 3^10 = 59,049 people. Lose mentorship and you lose the entire growth model.
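The arithmetic is easy to check; a minimal sketch, assuming one founding mentor, six-month cycles, and that every person in the community mentors two newcomers per cycle:

```python
YEARS = 5
CYCLES = YEARS * 2  # six-month mentorship cycles

# Broadcasting: 1,000 drive-by contributions per year, linear growth.
broadcast_total = 1_000 * YEARS

# Mentoring: each person brings in two more per cycle, so the
# community triples every six months (1 person -> 3 -> 9 -> ...).
mentored_total = 3 ** CYCLES

print(broadcast_total)  # 5000
print(mentored_total)   # 59049
```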

But you can't mentor everyone. So you need filters. Not to exclude newcomers, but to identify who's ready for that investment. That's where the 3 Cs come in.

The 3 Cs Framework

Comprehension: Do they understand the problem well enough to propose this change? Some projects now require contributors to open an issue and get approval before submitting code. The comprehension check happens in that conversation, not in the PR review. In-person code sprints and hackathons work well here too — real-time discussion reveals whether someone gets it.

You're not expecting them to understand the whole project. But they should understand their own change. If they're committing code above their comprehension level, you're setting both of you up for failure.

Context: Do they give you what you need to review this well? Did they link the issue? Explain trade-offs? Disclose AI use? Context is about your ability to do your job as a reviewer.

AI disclosure is becoming standard practice. ROOST has a three-principle policy. The Processing Foundation added a checkbox. Fedora landed a lightweight disclosure policy after months of discussion. This isn't about banning AI. It's about calibrating your review. When you know a PR was AI-assisted, you ask different questions — not "does this run?" but "do you understand why this works?"
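A disclosure prompt can be a single checkbox in your pull request template. A minimal sketch (the wording and placement are illustrative, not ROOST's or the Processing Foundation's actual text):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -->
## Checklist

- [ ] I opened an issue and got approval before submitting this PR
- [ ] I used AI tools for part of this change (if checked, name the
      tool and describe what you reviewed and changed by hand)
- [ ] I can explain every line of this change and why it works
```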

There's also AGENTS.md, a new convention like robots.txt for AI coding agents. Projects like scikit-learn, Goose, and Processing use it to tell agents: follow our guidelines, check if an issue is assigned, respect our norms. It shifts the burden of gathering context to the contributor and their tools.
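A starter AGENTS.md can be a few plain-Markdown lines at the repository root. A hypothetical sketch, not copied from scikit-learn, Goose, or Processing:

```markdown
<!-- AGENTS.md, at the repository root -->
# Guidance for AI coding agents

- Read CONTRIBUTING.md and follow it; PRs that skip it will be closed.
- Before writing code, check that the linked issue exists, is approved,
  and is not already assigned to someone else.
- Disclose AI assistance in the pull request description.
- Keep changes small and explain trade-offs; the human contributor must
  be able to answer review questions about every line.
```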

Continuity: Do they keep coming back? This is the mentorship filter. Drive-by contributions can be useful, but limit your mentorship investment to people who engage thoughtfully over time.

Scale your mentorship based on continuity. Great first conversation? Make your review a teachable moment. They come back? Pair on something harder. They keep coming back? Invite them to an event or consider commit access.

What This Changes For Developers

If you're a maintainer, the 3 Cs give you permission to close PRs guilt-free. A polished pull request lands without following guidelines? Close it. Protect your time for contributions that show all three Cs.

If someone comes back, engages in issues, submits a second PR and responds thoughtfully to feedback — now you pay attention. That's when you invest. This is how you protect the multiplier effect without burning out.

There's a bias reduction benefit too. When you rely on vibes, you tend to mentor people who look like you or share your cultural context. The 3 Cs give you a rubric instead of gut feelings. That makes your mentorship more equitable.

If you're a contributor, this framework tells you how to stand out. Open an issue before submitting code. Explain your thinking, not just your solution. Disclose if you used AI. And most importantly: come back. Respond to feedback. Submit a second PR. Show continuity.

The contributors who understand this will get mentored. The ones who treat open source like a code vending machine won't. That's not gatekeeping. That's how communities survive.

Try It Yourself

Pick one C to implement this week:

Comprehension: Add a requirement to your CONTRIBUTING.md that contributors must open an issue and get approval before submitting a pull request. Codex and Gemini CLI both recently added this guideline. It filters out drive-by PRs and forces a comprehension check before code review.
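The requirement itself can be a short addition to CONTRIBUTING.md; a hypothetical wording, not taken from any particular project:

```markdown
## Before you open a pull request

1. Open an issue describing the problem and your proposed approach.
2. Wait for a maintainer to approve the approach and assign the issue.
3. Only then submit a PR, and link the issue in the description.

Pull requests without an approved, linked issue will be closed.
```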

Context: Add an AI disclosure checkbox to your pull request template, or create an AGENTS.md file. Processing's version tells agents to follow contribution guidelines and check if issues are already assigned. ROOST's policy has three simple principles. Pick one and adapt it.

Continuity: Track who comes back. Keep a mental or literal list of contributors who respond thoughtfully to feedback and submit follow-up PRs. Those are your mentorship candidates. Everyone else gets a standard review, nothing more.
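A literal list can be as simple as counting PR authors over time; a sketch with hypothetical author names, assuming you export them from your repository's PR history (for example, via the GitHub API):

```python
from collections import Counter

# Hypothetical list of PR authors, oldest to newest.
pr_authors = ["alice", "bob", "alice", "carol", "alice", "bob"]

counts = Counter(pr_authors)
# Contributors with two or more PRs are your continuity signal.
mentorship_candidates = sorted(a for a, n in counts.items() if n >= 2)
print(mentorship_candidates)  # ['alice', 'bob']
```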

Start with one. Look for all three when deciding who to mentor. The decision tree is simple: PR lands → follows guidelines? No → close it. Yes → review → they come back? Yes → consider mentorship.
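The decision tree fits in a few lines; a sketch where the two boolean signals are hypothetical stand-ins for your own judgment calls:

```python
def triage(follows_guidelines: bool, is_returning: bool) -> str:
    """Route an incoming PR per the 3 Cs decision tree."""
    if not follows_guidelines:
        return "close"            # no issue link, no disclosure: close it
    if is_returning:
        return "review + mentor"  # continuity shown: invest in them
    return "review"               # standard review, nothing more

print(triage(False, False))  # close
print(triage(True, True))    # review + mentor
```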

GitHub is also building platform-level solutions. Their product team published an RFC for community feedback. If you maintain a project affected by this, add your voice. Platform changes take time, but they're coming.

The Bottom Line

Use the 3 Cs if you maintain any open source project that accepts external contributions. Skip it if you're a solo maintainer who doesn't mentor or if your project is already closed to outside PRs.

The real risk here isn't AI-generated code. It's losing the mentorship multiplier that made open source scale in the first place. If maintainers burn out trying to review everything, we lose the knowledge transfer that creates the next generation of maintainers. That's an existential threat.

The opportunity is building guardrails that protect human relationships while still welcoming AI-assisted contributions. Comprehension and Context get you reviewed. Continuity gets you mentored. That's the filter that keeps communities healthy.

AI tools aren't going anywhere. The question is whether we adapt our practices fast enough to maintain what actually makes open source work. The 3 Cs are one answer. Implement them before you need them.

Source: GitHub Blog