Speed Is Nothing Without Control: Keeping Quality High in the AI Era
AI coding tools make you faster, but speed without quality just compounds technical debt. GitHub Code Quality uses CodeQL and LLMs to catch maintainability issues in pull requests with one-click fixes. Here's how to move fast without breaking things.
TL;DR
- AI coding tools make you faster, but speed without quality just compounds technical debt
- GitHub Code Quality (public preview) uses CodeQL + LLMs to catch maintainability issues in pull requests with one-click fixes
- Better prompting means setting goals and constraints, not just asking for code
- Documentation matters more than ever—show your thinking, not just your output
The Big Picture
We're all moving faster with AI. Features that took days now take hours. Boilerplate that ate your morning now generates in seconds. But here's the problem nobody wants to talk about: a lot of that code is garbage.
Not broken garbage. Worse. The kind that compiles, passes a quick glance, and ships—only to reveal itself as a maintenance nightmare three months later when you're hunting down a production bug at 2am. Unused variables. Duplicated functions. Logic that works until it doesn't.
GitHub calls this "AI slop," and it's the inevitable result of treating AI like a code vending machine instead of a tool that needs direction. At GitHub Universe 2025, VP of Product Marcelo Oliveira put it plainly: "The best drivers aren't the ones who simply go the fastest, but the ones who stay smooth and in control at high speed."
The teams winning right now aren't just fast. They're fast and precise. They've figured out how to use AI without letting it turn their codebase into a junkyard. Here's how they do it.
How It Works
GitHub Code Quality: The Guardrail You Actually Need
GitHub Code Quality is now in public preview, and it's built to solve exactly this problem. It's a combination of CodeQL static analysis and LLM-based detection that runs automatically on your repositories and flags issues in pull requests before they merge.
Here's what makes it different from traditional linters: it understands context. It's not just catching syntax errors or style violations. It's identifying maintainability risks, reliability problems, and technical debt patterns that would normally require a senior engineer to spot during code review.
The workflow is dead simple. Enable it at the repository level. Open a pull request. GitHub Code Quality analyzes the diff and surfaces issues inline—unused variables, duplicated logic, potential runtime errors. Then it offers a one-click fix.
Take this example from GitHub's own demo. A developer writes a fuel calculator function that works but has obvious problems:
```js
// fuelCalculator.js
export function calculateFuelUsage(laps, fuelPerLap) {
  const lastLap = laps[laps.length - 1]; // unused variable
  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }
  // duplicated function
  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }
  return totalFuel(laps, fuelPerLap);
}
```

GitHub Code Quality catches the unused variable, the duplicated function declaration, and the missing input validation. It suggests a cleaned-up version:
```js
export function calculateFuelUsage(laps, fuelPerLap) {
  if (!Array.isArray(laps) || typeof fuelPerLap !== "number") {
    throw new Error("Invalid input");
  }
  return laps.length * fuelPerLap;
}
```

No back-and-forth in code review. No "we'll fix it later" that never happens. Just clean code that ships.
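To make the payoff concrete, here's a quick sketch (written for this article, not taken from GitHub's demo) showing how the validated version behaves on good and bad input. Before the fix, a call with the wrong type could silently produce a nonsense total; now it fails fast:

```js
// Cleaned-up calculator, reproduced here so the sketch is self-contained
function calculateFuelUsage(laps, fuelPerLap) {
  if (!Array.isArray(laps) || typeof fuelPerLap !== "number") {
    throw new Error("Invalid input");
  }
  return laps.length * fuelPerLap;
}

// Valid input: 3 lap times at 2.5 units of fuel per lap
console.log(calculateFuelUsage([82.1, 81.9, 82.4], 2.5)); // 7.5

// Invalid input is rejected up front instead of producing garbage
try {
  calculateFuelUsage("not an array", 2.5);
} catch (e) {
  console.log(e.message); // Invalid input
}
```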
The tool also includes an AI Findings page that highlights technical debt in files your team is actively working on. This is smart. Instead of dumping a 10,000-line report of every issue in your codebase, it surfaces problems when they're already in context. You're touching that file anyway—might as well fix the debt while you're there.
And if you want to enforce standards, GitHub Code Quality integrates with Rulesets. You can block merges that don't meet your quality bar. No exceptions, no "just this once." The bar stays consistent without burning out your reviewers.
Prompting Like You Mean It
AI tools are only as good as the instructions you give them. Vague prompts get vague results. If you type "refactor this file," you'll get... something. Maybe it's better. Maybe it's just different. Maybe it breaks prod.
GitHub's recommended prompting framework treats AI like a junior engineer who needs clear direction:
Set the goal, not just the action. Instead of "refactor this file," try "refactor this file to improve readability and maintainability while preserving functionality, no breaking changes allowed." You're defining success criteria, not just issuing a command.
Establish constraints. "No third-party dependencies." "Must be backwards compatible with v1.7." "Follow existing naming patterns." These boundaries keep AI from going rogue and introducing dependencies you don't want or breaking changes you can't afford.
Provide reference context. Link to related files, existing tests, architectural decision records. The more context AI has, the better it can match your team's patterns and standards.
Decide the output format. Do you want a pull request? A diff? A code block with commentary? Be explicit.
With GitHub Copilot's coding agent, you can assign multi-step tasks with all of these elements baked in:
```
Create a new helper function for formatting currency across the app.
- Must handle USD and EUR
- Round up to two decimals
- Add three unit tests
- Do not modify existing price parser
- Return as a pull request
```

Notice the division of labor. You're responsible for the thinking—what needs to happen, why, and under what constraints. The agent is responsible for the doing—writing the code, running the tests, opening the PR. This is the right way to use AI. You stay in control. The agent stays in its lane.
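For illustration only, here's one shape the agent's output might take under those constraints. The function name and details are hypothetical, not an actual agent result:

```js
// currencyFormatter.js — hypothetical sketch of the requested helper
function formatCurrency(amount, currency) {
  const symbols = { USD: "$", EUR: "€" };
  if (!(currency in symbols) || typeof amount !== "number" || !Number.isFinite(amount)) {
    throw new Error("Unsupported currency or invalid amount");
  }
  // "Round up to two decimals," per the task's constraint
  const rounded = Math.ceil(amount * 100) / 100;
  return `${symbols[currency]}${rounded.toFixed(2)}`;
}
```

The three unit tests would arrive in the same pull request, with the existing price parser left untouched, exactly as the constraints demanded.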
Documentation as a Signal of Quality
As AI handles more execution work, the differentiator for developers shifts. It's no longer just about writing code. It's about communicating decisions, trade-offs, and reasoning. Your code shows what you did. Your documentation shows why it matters.
GitHub recommends a simple workflow to make your thinking visible:
Start with an issue. Capture the problem, what success looks like, constraints, and risks. This becomes the anchor for everything that follows.
Name branches and commits thoughtfully. Use meaningful names that narrate your reasoning, not just your keystrokes. "fix-bug" tells me nothing. "fix-race-condition-in-auth-flow" tells me everything.
Document decisions as you build. When you choose one approach over another, write a short note explaining why. What alternatives did you consider? What trade-offs did you make?
Write pull requests with signal-rich context. Add a "Why," "What changed," and "Trade-offs" section. Include screenshots or test notes if relevant.
Instead of "Added dark mode toggle," write:
- Added dark mode toggle to improve accessibility and user preference support.
- Chose localStorage for persistence to avoid server dependency.
- Kept styling changes scoped to avoid side effects on existing themes.

This isn't busywork. It's the artifact that lets your team understand your code six months from now when you're not around to explain it. And in an AI-accelerated workflow, where code gets written faster than ever, this kind of documentation is the only thing keeping your codebase from becoming an archaeological dig site.
What This Changes For Developers
The shift here is subtle but important. AI coding tools have been sold on speed—write code faster, ship features faster, move faster. And that's true. But speed without control is just chaos with a shorter feedback loop.
What GitHub is pushing with Code Quality and better prompting practices is a different model: speed and control as a package deal. You don't have to choose between velocity and quality. You can have both, but only if you build the right guardrails and stay in the driver's seat.
This changes the day-to-day in a few concrete ways:
Code review gets faster and more consistent. When GitHub Code Quality catches the obvious stuff automatically, reviewers can focus on architecture, logic, and design decisions instead of hunting for unused variables and duplicated functions. The quality bar stays high without burning people out.
Technical debt becomes manageable. Instead of accumulating silently until it chokes your velocity, debt gets surfaced in context when you're already working in the relevant files. You fix it incrementally, not in a massive refactor that takes three sprints and breaks everything.
AI becomes a tool you direct, not a black box you hope works. Better prompting means you get predictable, high-quality output instead of rolling the dice every time you ask Copilot to generate code. You stay accountable for the thinking. AI stays accountable for the execution.
This is the model that scales. It's how you keep quality high even as your team grows, your codebase expands, and AI takes on more of the grunt work. And it's how you avoid the trap of moving fast in the short term only to drown in technical debt six months later.
Try It Yourself
If you're already using GitHub Copilot, enabling Code Quality is a one-click operation in your repository settings. Turn it on, open a pull request, and see what it catches. You'll probably be surprised—and maybe a little embarrassed—at how much slips through without it.
For prompting, start small. Next time you ask Copilot to generate code, add one constraint. "No third-party dependencies." "Must be backwards compatible." "Follow existing naming patterns." See how the output changes. Then add another constraint. Build the habit of treating AI like a junior engineer who needs clear direction, not a magic wand.
For documentation, pick one pull request this week and add a "Why" section. Three sentences. What problem does this solve? What alternatives did you consider? What trade-offs did you make? That's it. Do it once, see how it feels, and decide if it's worth doing again.
If you want to see how GitHub is thinking about AI-driven workflows more broadly, check out their work on continuous efficiency and AI agents that optimize code while you sleep. It's the same philosophy—speed and control as a package deal—applied to performance optimization instead of code quality.
The Bottom Line
Use GitHub Code Quality if you're shipping AI-generated code at any kind of scale. It's the difference between moving fast and moving fast without breaking things. The one-click fixes alone will save you hours of code review churn.
Skip it if you're working solo on throwaway prototypes or you have a senior engineer reviewing every line of code manually. But even then, the AI Findings page is worth a look for surfacing technical debt you didn't know existed.
The real risk here isn't AI making you slower. It's AI making you faster in a way that compounds problems instead of solving them. GitHub Code Quality, better prompting, and visible documentation are how you avoid that trap. Speed matters. But only if you can trust what you're shipping.
Source: GitHub Blog