Master Copilot Code Review: How to Write Instructions That Work
GitHub's guide to writing Copilot code review instructions that actually work. Learn what to include, what to avoid, and how to structure files for consistent automated reviews.
TL;DR
- Copilot code review now supports both repo-wide `copilot-instructions.md` and path-specific `*.instructions.md` files for customization
- Keep instructions under 1,000 lines, use bullet points over paragraphs, and show code examples
- Don't ask Copilot to change comment formatting or the PR overview, and don't include external links; it ignores formatting requests and never follows links
- Essential for teams that want consistent, automated code reviews without fighting the tool
The Big Picture
GitHub's Copilot code review can automate your team's code review process, but only if you know how to talk to it. The feature recently added support for custom instructions files — both a centralized `copilot-instructions.md` and path-specific `*.instructions.md` files. This gives you control over how Copilot reviews your code.
The problem? Most teams don't know how to write instructions that actually work. They write novels when Copilot wants bullet points. They ask it to change UI formatting when it can only review code. They link to external docs that Copilot will never read.
GitHub reviewed hundreds of instructions files and distilled what works. This isn't theory — it's based on real usage patterns and common failures. If you're already using Copilot code review and wondering why it ignores your carefully crafted guidelines, this is why.
The core insight: Copilot code review is non-deterministic and has hard limitations. You can't wish those away with clever prompting. But you can work within them to get consistent, useful reviews.
How It Works
Copilot code review reads two types of instruction files from your repository. The first is `copilot-instructions.md` in your `.github` directory — this applies to your entire repo. The second is any `NAME.instructions.md` file in `.github/instructions` with an `applyTo` frontmatter property that targets specific paths or file types.
The distinction matters. Use `copilot-instructions.md` for team-wide standards like "Flag use of deprecated libraries across the codebase." Use path-specific files for language rules, framework conventions, or subsystem-specific guidelines. You can target files with glob patterns: `applyTo: "**/*.py"` for all Python files, or `applyTo: "documentation/*.md"` for docs only.
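As a sketch, a path-specific file for Python might start like this (the filename and the rules themselves are illustrative, not from GitHub's guide):

````markdown
---
applyTo: "**/*.py"
---

# Python Review Rules

- Use `snake_case` for function and variable names
- Prefer f-strings over `str.format()` or `%` formatting
- Flag bare `except:` clauses
````

The frontmatter block is what scopes the rules; everything below it is ordinary markdown that Copilot reads as review guidance.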
Path-specific files also support an `excludeAgent` frontmatter property. This lets you write instructions meant only for Copilot code review or only for Copilot coding agent. If you're already using `agents.md` files, this separation prevents cross-contamination between different Copilot features.
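A sketch of what that frontmatter might look like — note the `excludeAgent` value shown here is a guess for illustration; check GitHub's documentation for the exact agent identifiers it accepts:

````markdown
---
applyTo: "**/*.ts"
# Hypothetical value: scopes this file away from the coding agent
excludeAgent: "copilot-coding-agent"
---

# Rules for code review only
- Flag TODO comments that lack an issue reference
````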
The instructions themselves need structure. Copilot processes headings and bullet points better than prose. Short, imperative rules beat long explanations. "Use camelCase for variable names" works. "We've found that camelCase naming conventions tend to improve readability in our codebase" doesn't.
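To make that concrete, here is the same rule written both ways — a sketch of the register contrast, not text from GitHub's guide:

````markdown
<!-- Works: short, imperative, scannable -->
- Use `camelCase` for variable names
- Flag any `console.log` left in production code

<!-- Doesn't work: hedged prose the model tends to skim past -->
We've generally found that it's preferable, where possible, to avoid
leaving stray console.log calls in code that ships to production.
````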
Code examples are critical. Show good and bad patterns side-by-side, just like you would in a human code review. Copilot learns from examples faster than from abstract rules. A TypeScript snippet showing correct interface naming teaches more than three paragraphs about PascalCase conventions.
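For instance, a snippet like the following could live in an instructions file to show the expected interface-naming pattern (the names here are illustrative, not from GitHub's guide):

```typescript
// Good: PascalCase for the interface, camelCase for members
interface UserProfile {
  displayName: string;
  isActive: boolean;
}

// Bad: lowercase interface name, snake_case members
// interface user_profile { display_name: string; is_active: boolean; }

const example: UserProfile = { displayName: "Ada", isActive: true };
console.log(example.displayName);
```

Pairing the good and bad versions in one place gives Copilot a direct pattern to match violations against.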
Length matters more than you'd think. Instructions files over 1,000 lines lead to inconsistent behavior. The LLM gets confused. It starts ignoring rules or applying them randomly. If your file is that long, split it into multiple path-specific files organized by topic — security, testing, style, framework rules.
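A topic-based split along those lines might look like this (file names and globs are illustrative):

````text
.github/
├── copilot-instructions.md          # repo-wide rules only
└── instructions/
    ├── security.instructions.md     # applyTo: "**/*"
    ├── testing.instructions.md      # applyTo: "**/*.test.ts"
    └── typescript.instructions.md   # applyTo: "**/*.ts"
````

Each file stays short and scoped, so no single file approaches the 1,000-line threshold.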
What doesn't work: external links. Copilot won't follow them. Copy the relevant content into your instructions instead. Vague directives like "be more accurate" or "identify all issues" add noise — Copilot is already tuned for this. Requests to change comment formatting, modify the PR overview, or alter product behavior outside code review are ignored entirely.
What This Changes For Developers
Before custom instructions, Copilot code review was a black box. It caught some issues, missed others, and you couldn't predict which. Teams either accepted its default behavior or didn't use it at all.
Now you can encode your team's standards directly. If you use Jest for testing, tell Copilot. If you prefix private variables with underscores, document it. If you have a custom error handling pattern, show an example. Copilot will look for violations during review.
The workflow changes from reactive to proactive. Instead of catching style violations in human review, Copilot flags them first. Your senior engineers stop writing the same comments about naming conventions and start focusing on architecture and logic. Junior developers get consistent feedback before their PR reaches human reviewers.
Path-specific instructions solve the polyglot problem. Your Python code follows different conventions than your TypeScript. Your documentation has different standards than your application code. One `copilot-instructions.md` file can't handle all of that without becoming a mess. Separate `python.instructions.md` and `typescript.instructions.md` files keep rules organized and targeted.
The real win is iteration. Start with five rules. See what Copilot catches. Add more. Refine the ones that don't work. Instructions files are code — they belong in version control, they get reviewed in PRs, they evolve with your codebase. This is similar to how GitHub trained Copilot's next-edit model — continuous refinement based on real usage.
Try It Yourself
Here's a minimal `typescript.instructions.md` file that demonstrates the structure:

````markdown
---
applyTo: "**/*.ts"
---

# TypeScript Coding Standards

## Naming Conventions

- Use `camelCase` for variables and functions
- Use `PascalCase` for class and interface names
- Prefix private variables with `_`

## Code Style

- Prefer `const` over `let` when variables are not reassigned
- Avoid using `any` type; specify more precise types
- Limit line length to 100 characters

## Error Handling

- Always handle promise rejections with `try/catch` or `.catch()`
- Use custom error classes for application-specific errors

## Example

```typescript
// Good
const fetchUser = async (id: number): Promise<User> => {
  try {
    // ...fetch logic
  } catch (error) {
    // handle error
  }
};

// Bad
async function FetchUser(Id) {
  // ...fetch logic, no error handling
}
```
````
Create this file as `.github/instructions/typescript.instructions.md`. Open a PR that modifies TypeScript files. Add Copilot as a reviewer. It will flag violations of these rules.
If you already have instructions files that need cleanup, use Copilot coding agent to refactor them. Navigate to github.com/copilot/agents, select your repo, and paste GitHub's provided prompt that removes unsupported content, adds structure, and splits language-specific rules into separate files. Copilot will create a draft PR with the changes.
The Bottom Line
Use custom instructions if you have team coding standards that aren't obvious from the code itself. Skip them if your team is small, your conventions are standard, or you're still figuring out what your standards should be.
The real risk is over-engineering. A 2,000-line instructions file with every possible rule will perform worse than a 200-line file with your ten most important conventions. Start small. Add rules when you notice Copilot missing things you care about. Remove rules that don't change behavior.
This feature shines for teams with established style guides, framework-specific patterns, or security requirements that need enforcement. It's less useful for greenfield projects or teams that prefer flexibility over consistency. The opportunity is turning institutional knowledge into automated review — but only if you're willing to maintain those instructions like you maintain your code.
Source: GitHub Blog