Building AI-Powered GitHub Issue Triage with the Copilot SDK

The GitHub Copilot SDK lets you embed Copilot's AI into custom apps. This tutorial walks through building IssueCrush, a swipeable issue triage tool, and covers server-side architecture, prompt engineering, and graceful degradation patterns.

TL;DR

  • The GitHub Copilot SDK lets you embed Copilot's AI into custom apps — this tutorial shows how to build an issue triage tool called IssueCrush
  • Server-side integration is mandatory: the SDK requires Node.js and the Copilot CLI binary, so React Native apps need a backend proxy
  • Structured prompts with issue metadata (labels, author, state) produce better summaries than dumping raw text
  • Always implement graceful degradation — AI services fail, and your app should still work when they do

The Big Picture

If you maintain open source projects or work on active repositories, you know the notification dread. Forty-seven issues. Some are bugs, some are feature requests, some should be discussions, and some are duplicates from three years ago.

The mental overhead is real. Each issue requires context-switching: read the title, scan the description, check labels, assess priority, decide what to do. Multiply that by dozens of issues across multiple repos, and suddenly your brain is mush.

The GitHub Copilot SDK offers a way out. It's the same AI that powers Copilot Chat, but you can embed it in your own applications. Andrea Griffiths built IssueCrush to test this in practice — a swipeable card interface for GitHub issues where Copilot reads each one and tells you what it's about and what to do with it.

This isn't a toy demo. The architecture patterns here apply to any developer tool that needs AI assistance: code review bots, documentation generators, test case writers, or deployment validators. The SDK gives you direct access to Copilot's language models without building your own LLM infrastructure.

How It Works

The first technical decision is where to run the SDK. React Native apps can't directly use Node.js packages, and the Copilot SDK requires a Node.js runtime. Internally, the SDK manages a local Copilot CLI process and communicates with it over JSON-RPC. Because of this dependency on the CLI binary and a Node environment, the integration must run server-side.

The architecture is straightforward: React Native client talks to a Node.js server over HTTPS. The server runs the Copilot SDK, which manages the CLI process. The CLI connects to GitHub's Copilot service. The client separately handles GitHub OAuth and fetches issue data from the GitHub REST API.
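The server side of that picture reduces to a single route. This is a framework-agnostic sketch of the handler (Express would wire it up the same way); `summarizeHandler` and `summarizeIssue` are hypothetical names, with `summarizeIssue` standing in for a helper that wraps the Copilot SDK session:

```javascript
// Sketch of the server-side summarize route, written as a plain async
// handler so the web framework is interchangeable. summarizeIssue is a
// hypothetical helper that runs the Copilot SDK session lifecycle.
async function summarizeHandler(req, res, summarizeIssue) {
  try {
    const summary = await summarizeIssue(req.body.issue);
    res.status(200).json({ summary });
  } catch (err) {
    // Graceful degradation: tell the client the AI path failed
    res.status(502).json({ error: 'AI summary unavailable' });
  }
}
```

Because the handler takes `summarizeIssue` as a parameter, the Copilot-specific code stays isolated and the route can be exercised without a live AI backend.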

This setup has four advantages. First, a single SDK instance is shared across all clients. You're not spinning up a new connection per mobile client — the server manages one instance for every request. Less overhead, fewer auth handshakes, simpler cleanup.

Second, server-side secrets for Copilot authentication keep credentials secure. Your API tokens never touch the client. They live on the server where they belong, not inside a React Native bundle someone can decompile.

Third, graceful degradation when AI is unavailable means you can still triage issues even if the Copilot service goes down or times out. The app falls back to a basic summary built from issue metadata. AI makes triage faster, but it shouldn't be a single point of failure.

Fourth, logging of requests for debugging and monitoring happens naturally because every prompt and response passes through your server. You can track latency, catch failures, and debug prompt issues without bolting instrumentation onto the mobile client.

Before you build something like this, you need three things:

  • The Copilot CLI installed on your server
  • A GitHub Copilot subscription, or a BYOK configuration with your own API keys
  • The Copilot CLI authenticated: run copilot auth on your server, or set a COPILOT_GITHUB_TOKEN environment variable

The SDK uses a session-based model. You start a client (which spawns the CLI process), create a session, send messages, then clean up. The lifecycle is strict: start() → createSession() → sendAndWait() → disconnect() → stop().

Failing to clean up sessions leaks resources. Griffiths spent two hours debugging memory issues before realizing she'd forgotten a disconnect() call. Wrap every session interaction in try/finally. The .catch(() => {}) on cleanup calls prevents cleanup errors from masking the original error.

Prompt structure matters more than prompt length. Feeding the model organized metadata like title, labels, and author produces much better summaries than dumping the entire issue body as raw text. The prompt includes issue details (title, number, repository, state, labels, created date, author) and the issue body, then asks for a concise 2-3 sentence summary that explains what the issue is about, identifies the key problem or request, and suggests a recommended action.
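A structured prompt builder along those lines might look like the sketch below. The field names (`repository`, `created_at`, and so on) are illustrative stand-ins for whatever your GitHub API layer returns, not part of the SDK:

```javascript
// Sketch of a structured prompt: organized metadata first, body last,
// then the explicit 2-3 sentence ask. Field names are assumptions.
function buildIssuePrompt(issue) {
  const labels = (issue.labels || []).join(', ') || 'none';
  return [
    'Summarize this GitHub issue in a concise 2-3 sentence summary.',
    'Explain what the issue is about, identify the key problem or request,',
    'and suggest a recommended action.',
    '',
    `Title: ${issue.title}`,
    `Issue: #${issue.number} in ${issue.repository}`,
    `State: ${issue.state}`,
    `Labels: ${labels}`,
    `Created: ${issue.created_at} by ${issue.author}`,
    '',
    'Body:',
    issue.body || '(no description provided)',
  ].join('\n');
}
```

Keeping the metadata in labeled lines, rather than interleaved with the body text, makes it easy for the model to pick up the signals that matter for triage.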

The labels and author context matter more than you'd think. An issue from a first-time contributor needs different handling than one from a core maintainer, and the AI uses this information to adjust its summary.

The sendAndWait() method returns the assistant's response once the session goes idle. Always validate that the response chain exists before accessing nested properties. The second argument to sendAndWait() is a timeout in milliseconds. Set it high enough for complex issues but low enough that users aren't staring at a spinner.
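That validation can be factored into a small guard function. The `response.data.content` shape here follows the extraction step in this article's full example; treat it as an assumption about the response, not a documented contract:

```javascript
// Defensive extraction: validate the whole response chain before
// touching nested properties, and return null if anything is missing.
function extractSummary(response) {
  if (response && response.data && typeof response.data.content === 'string') {
    return response.data.content;
  }
  return null; // caller can fall back to the metadata summary
}
```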

What This Changes For Developers

The SDK opens real possibilities for building intelligent developer tools. Combined with React Native's cross-platform reach, you can bring AI-powered workflows to mobile in a way that feels native and fast.

Triage is one of those invisible tasks that burns people out. Nobody thanks you for it, and it piles up fast. If you can cut the time it takes to process 50 issues in half, that's time back for code review, mentoring, or just not dreading your notification badge.

The bigger pattern here applies beyond issue triage. Any workflow that involves reading, summarizing, and making decisions on structured data is a candidate for this architecture. Code review comments, pull request descriptions, documentation updates, test failure analysis — all of these follow the same pattern: fetch data from an API, send it to Copilot with a structured prompt, return actionable results.

The server-side pattern also solves a common problem with AI integrations: how do you keep credentials secure while still giving users a responsive experience? By proxying through your own backend, you control authentication, rate limiting, and error handling. The client stays simple and fast.

Graceful degradation is critical. AI services go down. Rate limits happen. Design for it from day one. In IssueCrush, subscription errors return a 403 so the client can show a clear message. Everything else falls back to a summary built from issue metadata: title, labels, and the first sentence of the body. Not as good as AI, but still useful.
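The metadata fallback can be a pure function. This sketch (my own illustration, not IssueCrush's exact code) combines the title, labels, and first sentence of the body:

```javascript
// Metadata-only fallback for when the AI path is down or rate-limited:
// title, labels, and the first sentence of the issue body.
function fallbackSummary(issue) {
  const labels = (issue.labels || []).length
    ? ` [${issue.labels.join(', ')}]`
    : '';
  // Split on sentence-ending punctuation followed by whitespace
  const firstSentence = (issue.body || '').split(/(?<=[.!?])\s/)[0].trim();
  return `${issue.title}${labels}${firstSentence ? ': ' + firstSentence : ''}`;
}
```

Because it needs nothing but data the client already fetched from the GitHub REST API, this path works even when the server's AI route returns an error.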

Cost control is built in as well. Summaries are generated on demand, not preemptively, which avoids wasted calls when users swipe past an issue without reading it. Once you have a summary, store it on the issue object: if the user swipes away and comes back, the cached version renders instantly. No second API call, no wasted money, no extra latency.
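A minimal sketch of that on-demand-plus-cache pattern, with the summary fetcher injected (`fetchSummary` and the `cachedSummary` property are hypothetical names):

```javascript
// On-demand summary with memoization on the issue object itself:
// the first call hits the backend, later calls reuse the cached value.
async function getSummary(issue, fetchSummary) {
  if (issue.cachedSummary) return issue.cachedSummary; // instant re-render
  issue.cachedSummary = await fetchSummary(issue);     // generate on demand
  return issue.cachedSummary;
}
```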

For teams already using GitHub Copilot, the SDK is a natural extension. You're already paying for the subscription. Now you can use that same AI in custom tooling that fits your workflow instead of adapting your workflow to fit the tools.

Try It Yourself

The core SDK integration is about 50 lines of code. Here's the essential pattern:

const { CopilotClient, approveAll } = await import('@github/copilot-sdk');

let client = null;
let session = null;

try {
  // 1. Initialize the client (spawns Copilot CLI in server mode)
  client = new CopilotClient();
  await client.start();

  // 2. Create a session with your preferred model
  session = await client.createSession({
    model: 'gpt-4.1',
    onPermissionRequest: approveAll,
  });

  // 3. Send your prompt and wait for response
  const response = await session.sendAndWait({ prompt }, 30000);

  // 4. Extract the content
  if (response && response.data && response.data.content) {
    const summary = response.data.content;
    // Use the summary...
  }

} finally {
  // 5. Always clean up
  if (session) await session.disconnect().catch(() => {});
  if (client) await client.stop().catch(() => {});
}

On the React Native side, wrap the API calls in a service class that handles initialization and error states. The UI is straightforward React state management: tap the button, call the service, cache the result.
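One way to sketch that service class, with the fetch function injected so it can be tested without a network. The `/api/summarize` endpoint name and response shape are assumptions, not IssueCrush's actual API:

```javascript
// Client-side service wrapper: owns the endpoint, request shape, and
// error handling, so UI components just call summarize(issue).
class SummaryService {
  constructor(baseUrl, fetchFn = fetch) {
    this.baseUrl = baseUrl;
    this.fetchFn = fetchFn; // injectable for tests
  }

  async summarize(issue) {
    const res = await this.fetchFn(`${this.baseUrl}/api/summarize`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ issue }),
    });
    if (!res.ok) throw new Error(`Summary failed: ${res.status}`);
    const data = await res.json();
    return data.summary;
  }
}
```

The UI layer catches the thrown error and switches to the metadata fallback, so components never deal with HTTP details directly.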

The server exposes a /health endpoint that signals AI availability. Clients check it on startup and hide the summary button entirely if the backend can't support it. No broken buttons.
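The startup check might look like this; the response shape (an `aiAvailable` flag) is an assumption about the /health payload:

```javascript
// Startup health check: any network failure or non-OK status is treated
// as "AI unavailable", so the client hides the summary button.
async function checkAiAvailable(baseUrl, fetchFn = fetch) {
  try {
    const res = await fetchFn(`${baseUrl}/health`);
    if (!res.ok) return false;
    const data = await res.json();
    return Boolean(data.aiAvailable);
  } catch (err) {
    return false; // network failure: treat as unavailable
  }
}
```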

The SDK is loaded with await import('@github/copilot-sdk') instead of a top-level require. This lets the server start even if the SDK has issues, which makes deployment and debugging smoother.

Dependencies are minimal: @github/copilot-sdk version 0.1.14 or later, and Express for the server. The SDK communicates with the Copilot CLI process via JSON-RPC. You need the Copilot CLI installed and available in your PATH.

The Bottom Line

Use the Copilot SDK if you're building developer tools that need AI assistance and you already have a Copilot subscription. The server-side pattern is the right architecture for mobile apps or any client that can't run Node.js directly. Skip it if you're building a simple CLI tool — just use the Copilot CLI directly instead of wrapping it in the SDK.

The real risk is treating AI as infallible. Always implement graceful degradation. Your app should still work when the AI service is down, rate-limited, or returning garbage. The real opportunity is making maintainership sustainable. Look at the parts of your workflow that drain you and ask if AI can take a first pass.

The source code for IssueCrush is available on GitHub at AndreaGriffiths11/IssueCrush. The Copilot SDK repository has a Getting Started guide that walks you through your first integration in about five lines of code.

Source: GitHub Blog