Building AI-Powered GitHub Issue Triage with the Copilot SDK
GitHub's Copilot SDK lets you embed AI into custom apps. This walkthrough builds IssueCrush, a mobile issue triage tool, and shows the server-side architecture, prompt engineering, and error handling patterns you need to ship a production integration.
TL;DR
- The GitHub Copilot SDK lets you embed Copilot's AI into custom apps — this walkthrough builds a mobile issue triage tool called IssueCrush
- Server-side integration is mandatory: the SDK requires Node.js and the Copilot CLI binary, so React Native apps need a backend proxy
- Structured prompts with issue metadata (labels, author, state) produce better summaries than dumping raw text
- Session lifecycle management is strict — skip cleanup and you'll leak resources. Always use try/finally blocks
- Best for developers maintaining active repos or open source projects who want to cut triage time in half
The Big Picture
If you maintain open source projects or work on active team repositories, you know the notification dread. Forty-seven issues. Some are bugs. Some are feature requests. Some should be discussions. Some are duplicates from 2022. The mental overhead is real: read the title, scan the description, check labels, assess priority, decide. Multiply that by dozens of issues across multiple repos, and your brain turns to mush.
GitHub's Copilot SDK offers a way out. It's the same AI that powers Copilot Chat, but packaged as a Node.js library you can embed in your own tools. Andrea Griffiths, a developer at GitHub, built IssueCrush to test the SDK in practice — a swipeable mobile app that shows GitHub issues as cards. Swipe left to close, right to keep. Tap "Get AI Summary" and Copilot reads the issue and tells you what it's about and what to do with it. Instead of context-switching through every lengthy description, you get instant, actionable summaries.
This isn't a toy demo. The architecture patterns, prompt engineering, and error handling strategies here apply to any developer tool that needs AI assistance. If you've been wondering how to integrate the Copilot SDK into a real application, this is the blueprint.
How It Works
The first technical hurdle: React Native apps can't directly use Node.js packages. The Copilot SDK requires a Node.js runtime because it manages a local Copilot CLI process and communicates with it over JSON-RPC. The CLI binary must be installed and available on the system PATH. This dependency chain means the integration must run server-side, not in the mobile app.
Griffiths settled on a server-side proxy pattern. The React Native client talks to a Node.js backend over HTTPS. The backend runs the Copilot SDK, which spawns and manages the CLI process. The client separately handles GitHub OAuth and fetches issue data via the GitHub REST API. The server only handles AI summarization.
This architecture has four advantages:
- A single SDK instance is shared across all clients. You're not spinning up a new Copilot CLI connection per mobile session: less overhead, fewer auth handshakes, simpler cleanup.
- Server-side secrets keep Copilot credentials secure. API tokens never touch the client bundle, where they could be extracted from the app package.
- Graceful degradation when AI is unavailable. If the Copilot service times out or goes down, the app falls back to a metadata-based summary. AI makes triage faster, but it's not a single point of failure.
- Centralized logging. Every prompt and response passes through your server, so you can track latency, catch failures, and debug prompt issues without instrumenting the mobile client.
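The server's decision logic can be sketched framework-agnostically. This is a minimal sketch, not the IssueCrush implementation: `handleSummaryRequest`, `summarizeIssue`, and the `NO_COPILOT_SUBSCRIPTION` error code are hypothetical names; wire the returned status and body into whatever web framework you use.

```javascript
// Framework-agnostic proxy handler: AI first, fallback on failure,
// 403 when the Copilot subscription is missing.
async function handleSummaryRequest(issue, summarizeIssue) {
  try {
    const summary = await summarizeIssue(issue);
    return { status: 200, body: { summary, source: 'ai' } };
  } catch (err) {
    // A missing subscription is surfaced so the client can explain it...
    if (err && err.code === 'NO_COPILOT_SUBSCRIPTION') {
      return {
        status: 403,
        body: { error: 'AI summaries require a GitHub Copilot subscription.' },
      };
    }
    // ...everything else degrades to a metadata-based fallback.
    return { status: 200, body: { summary: issue.title, source: 'fallback' } };
  }
}

module.exports = { handleSummaryRequest };
```

Keeping this logic out of the route handler makes the failure modes easy to unit test without spinning up a server.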
Before you build something like this, you need three things:
- The Copilot CLI installed on your server.
- A GitHub Copilot subscription, or BYOK (bring your own key) configured with your own API keys.
- CLI authentication, either by running copilot auth on the server or by setting a COPILOT_GITHUB_TOKEN environment variable.
The SDK uses a session-based model. You start a client (which spawns the CLI process), create a session, send messages, then clean up. The lifecycle is strict: start() → createSession() → sendAndWait() → disconnect() → stop(). Skip cleanup and you leak resources. Griffiths spent two hours debugging memory issues before realizing she'd forgotten a disconnect() call. Wrap every session interaction in try/finally. The .catch(() => {}) on cleanup calls prevents cleanup errors from masking the original error.
Prompt structure matters more than prompt length. Feeding the model organized metadata — title, labels, author, state — produces much better summaries than dumping the entire issue body as raw text. Griffiths structures her prompts like this: "You are analyzing a GitHub issue to help a developer quickly understand it and decide how to handle it." Then she provides structured fields: title, number, repository, state, labels, created date, author, and body. The instructions are specific: "Provide a concise 2-3 sentence summary that explains what the issue is about, identifies the key problem or request, and suggests a recommended action. Keep it clear, actionable, and helpful for quick triage. No markdown formatting."
The labels and author context matter more than you'd think. An issue from a first-time contributor needs different handling than one from a core maintainer. The AI uses this information to adjust its summary. A bug labeled "critical" from a known contributor gets a different tone than a feature request from someone who just opened their first issue.
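A prompt builder along these lines can be sketched as a pure function. The field names on `issue` here are assumptions modeled on the GitHub REST API payload; the actual IssueCrush prompt template may differ in detail.

```javascript
// Build a structured triage prompt from issue metadata instead of
// dumping the raw issue body.
function buildSummaryPrompt(issue) {
  const labels =
    (issue.labels || []).map((l) => (l.name ? l.name : l)).join(', ') || 'none';
  return [
    'You are analyzing a GitHub issue to help a developer quickly understand it and decide how to handle it.',
    '',
    `Title: ${issue.title}`,
    `Number: #${issue.number}`,
    `Repository: ${issue.repository}`,
    `State: ${issue.state}`,
    `Labels: ${labels}`,
    `Created: ${issue.created_at}`,
    `Author: ${issue.user}`,
    '',
    'Issue body:',
    issue.body || '(no description provided)',
    '',
    'Provide a concise 2-3 sentence summary that explains what the issue is about,',
    'identifies the key problem or request, and suggests a recommended action.',
    'Keep it clear, actionable, and helpful for quick triage. No markdown formatting.',
  ].join('\n');
}

module.exports = { buildSummaryPrompt };
```

Because the function is pure, you can snapshot-test your prompts and iterate on the template without touching the AI call.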
Response handling requires defensive coding. The sendAndWait() method returns the assistant's response once the session goes idle. Always validate that the response chain exists before accessing nested properties. The second argument to sendAndWait() is a timeout in milliseconds. Set it high enough for complex issues but low enough that users aren't staring at a spinner. Thirty seconds is a reasonable default.
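The defensive extraction can be isolated in a small helper. This is a sketch assuming the `response.data.content` shape described above; the fallback string is whatever your metadata-based summary produces.

```javascript
// Validate the response chain before touching nested properties;
// return the fallback summary if anything is missing or empty.
function extractSummary(response, fallback) {
  if (
    response &&
    response.data &&
    typeof response.data.content === 'string' &&
    response.data.content.trim() !== ''
  ) {
    return response.data.content.trim();
  }
  return fallback;
}

module.exports = { extractSummary };
```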
On the React Native side, Griffiths wraps the API calls in a service class that handles initialization and error states. The service checks backend health on startup, sends issue data to the /api/ai-summary endpoint, and handles three response types: success with a summary, 403 errors when Copilot subscription is missing, and fallback summaries when AI fails.
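A client-side service along those lines might look like the following sketch. The `/health` and `/api/ai-summary` endpoint names come from the article; the class shape, field names, and the injectable `fetchFn` parameter are assumptions for illustration.

```javascript
// Client-side service: checks backend health on startup, requests
// summaries on demand, and surfaces the 403 subscription case clearly.
class SummaryService {
  constructor(baseUrl, fetchFn = fetch) {
    this.baseUrl = baseUrl;
    this.fetchFn = fetchFn; // injectable so tests can fake the network
    this.available = false;
  }

  // Hide the summary feature entirely if the backend can't support it.
  async init() {
    try {
      const res = await this.fetchFn(`${this.baseUrl}/health`);
      const body = await res.json();
      this.available = Boolean(body.aiAvailable);
    } catch (err) {
      this.available = false;
    }
    return this.available;
  }

  async getSummary(issue) {
    const res = await this.fetchFn(`${this.baseUrl}/api/ai-summary`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ issue }),
    });
    if (res.status === 403) {
      throw new Error('AI summaries require a GitHub Copilot subscription.');
    }
    return res.json(); // { summary, source: 'ai' | 'fallback' }
  }
}

module.exports = { SummaryService };
```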
The UI is straightforward React state management. Tap the button, call the service, cache the result. Once a summary exists on the issue object, the card swaps the button for the summary text. If the user swipes away and comes back, the cached version renders instantly. No second API call, no wasted money, no extra latency.
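The caching step reduces to a few lines. This sketch assumes the summary is stored on the issue object under a hypothetical `aiSummary` field; `fetchSummary` stands in for the service call above.

```javascript
// Cache the summary on the issue object so revisiting a card renders
// instantly with no second API call.
async function getCachedSummary(issue, fetchSummary) {
  if (issue.aiSummary) return issue.aiSummary; // cache hit
  const result = await fetchSummary(issue);
  issue.aiSummary = result.summary;
  return issue.aiSummary;
}

module.exports = { getCachedSummary };
```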
What This Changes For Developers
AI services can fail. Network issues, rate limits, and service outages happen. The server handles two failure modes. Subscription errors return a 403 so the client can show a clear message: "AI summaries require a GitHub Copilot subscription." Everything else falls back to a summary built from issue metadata. The fallback grabs the title, labels, and the first sentence of the body if it's under 200 characters. It's not as smart as the AI version, but it's better than nothing.
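A fallback builder following that description can be sketched as below. The exact formatting is an assumption; only the ingredients (title, labels, first sentence of a sub-200-character body) come from the article.

```javascript
// Metadata-based fallback summary: title, labels, and the first
// sentence of the body when the body is under 200 characters.
function buildFallbackSummary(issue) {
  const labels = (issue.labels || []).map((l) => (l.name ? l.name : l));
  let summary = `"${issue.title}"`;
  if (labels.length > 0) summary += ` (labels: ${labels.join(', ')})`;
  const body = (issue.body || '').trim();
  if (body !== '' && body.length < 200) {
    // Everything up to the first sentence-ending punctuation mark.
    const match = body.match(/^[^.!?]*[.!?]?/);
    summary += ` ${match[0]}`;
  }
  return summary;
}

module.exports = { buildFallbackSummary };
```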
The server exposes a /health endpoint that signals AI availability. Clients check it on startup and hide the summary button entirely if the backend can't support it. No broken buttons. No confusing error states. If AI isn't available, the feature doesn't appear.
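The health check itself is tiny. In this sketch, `res` is any object with a `json` method (as in Express), and `isAiReady` is a hypothetical probe that checks whether the SDK loaded and the CLI is authenticated; the `aiAvailable` field name is an assumption.

```javascript
// /health handler: reports whether AI summaries can be served so the
// client can hide the feature instead of showing broken buttons.
function healthHandler(isAiReady) {
  return (_req, res) => {
    res.json({ status: 'ok', aiAvailable: isAiReady() });
  };
}

module.exports = { healthHandler };
```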
Summaries are generated on-demand, not preemptively. This keeps API costs down and avoids wasted calls when users swipe past an issue without reading it. You only pay for what you use. If a user triages 50 issues but only requests summaries for 10, you make 10 API calls, not 50.
The SDK is loaded with await import('@github/copilot-sdk') instead of a top-level require. This lets the server start even if the SDK has issues, which makes deployment and debugging smoother. If the Copilot CLI isn't installed or authentication fails, the server still boots. The health check reports the problem, and the client adapts.
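The lazy-load pattern can be sketched like this; the variable names are illustrative, and the cached error is what lets the health check report the failure later.

```javascript
// Lazy-load the SDK so the server boots even if the package or CLI is
// broken; remember the failure instead of crashing.
let copilotSdk = null;
let sdkLoadError = null;

async function loadCopilotSdk() {
  if (copilotSdk || sdkLoadError) return copilotSdk;
  try {
    copilotSdk = await import('@github/copilot-sdk');
  } catch (err) {
    sdkLoadError = err; // /health can now report aiAvailable: false
  }
  return copilotSdk;
}

module.exports = { loadCopilotSdk };
```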
This pattern extends beyond issue triage. The same architecture works for PR review summaries, code explanation tools, documentation generators, or any workflow where you need AI assistance in a mobile or web app. The key insight is that the Copilot SDK doesn't need to live in your client. It can live in a thin backend service that your client calls when it needs AI.
For teams already using GitHub Copilot, this SDK unlocks custom workflows without switching to a different AI provider. You're using the same model, the same authentication, and the same billing. You're just calling it from your own code instead of from the IDE.
Try It Yourself
The core SDK integration is five steps. First, initialize the client, which spawns the Copilot CLI in server mode. Second, create a session with your preferred model (GPT-4.1 is the default). Third, send your prompt and wait for the response. Fourth, extract the content from the response data. Fifth, always clean up by disconnecting the session and stopping the client.
Here's the basic pattern from the IssueCrush server:
const { CopilotClient, approveAll } = await import('@github/copilot-sdk');

let client = null;
let session = null;

try {
  // 1. Initialize the client (spawns Copilot CLI in server mode)
  client = new CopilotClient();
  await client.start();

  // 2. Create a session with your preferred model
  session = await client.createSession({
    model: 'gpt-4.1',
    onPermissionRequest: approveAll,
  });

  // 3. Send your prompt and wait for response (30-second timeout)
  const response = await session.sendAndWait({ prompt }, 30000);

  // 4. Extract the content
  if (response && response.data && response.data.content) {
    const summary = response.data.content;
    // Use the summary...
  }
} finally {
  // 5. Always clean up; .catch(() => {}) keeps cleanup errors from
  // masking the original error
  if (session) await session.disconnect().catch(() => {});
  if (client) await client.stop().catch(() => {});
}
The full IssueCrush source code is available on GitHub at AndreaGriffiths11/IssueCrush. The repo includes the React Native client, the Node.js backend, and the full prompt engineering setup. The Copilot SDK documentation has a getting started guide that walks through your first integration in about five lines of code.
Dependencies are minimal. You need @github/copilot-sdk version 0.1.14 or later and Express 5.2.1 or later for the server. The SDK communicates with the Copilot CLI process via JSON-RPC, so you need the Copilot CLI installed and available in your PATH. Check the SDK's package requirements for the minimum Node.js version.
The Bottom Line
Use this if you maintain active repositories and triage is burning you out. The server-side proxy pattern works for any mobile or web app that needs AI assistance without embedding the full SDK in the client. The architecture scales, the error handling is production-ready, and the cost model is pay-per-use.
Skip this if you're building a simple CLI tool or a Node.js-only app. In those cases, you can use the SDK directly without the proxy layer. The overhead isn't worth it.
The real opportunity here isn't IssueCrush. It's the pattern. The Copilot SDK opens a path to embedding AI into developer workflows that don't fit the IDE model. Mobile apps, web dashboards, Slack bots, CI/CD pipelines — anywhere you have a Node.js runtime and a use case for AI assistance. Triage is one invisible task that burns people out. If you can cut the time it takes to process 50 issues in half, that's time back for code review, mentoring, or just not dreading your notification badge. Look at the parts of maintaining that drain you and ask if AI can take a first pass.
Source: GitHub Blog