GitHub December 2025: Five Incidents, Copilot Hit Hardest

GitHub's December report: five incidents including a 47% Copilot Code Review failure rate, two-week AI Controls outage, and database schema drift. What broke and what's being fixed.

TL;DR

  • GitHub had five incidents in December affecting Copilot, Actions, and core services
  • Copilot Code Review saw a 47% failure rate; AI Controls and policy updates also went down
  • Root causes: misconfigurations, database schema drift, traffic spikes, and network issues

What Dropped

GitHub published its December 2025 availability report, detailing five separate incidents that degraded service across the platform. The most impactful was Copilot Code Review, which failed on nearly half of pull request review requests for over three hours. Other incidents affected enterprise AI Controls, Copilot policy updates, GitHub Actions runners, and unauthenticated traffic.

The Dev Angle

If you rely on GitHub Copilot for agentic workflows, December was rough. The December 15 incident knocked out Copilot Code Review for more than three hours, forcing developers to manually re-request reviews. The error surfaced only as a generic "Copilot encountered an error" message; the actual root cause was elevated latency in an internal model dependency, which cascaded into request timeouts and a queue backlog.
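If you got tired of clicking through the re-request flow, reviewer requests can be scripted against GitHub's REST endpoint for requesting pull-request reviewers. A minimal sketch; the Copilot bot login and the helper name are assumptions for illustration, not something from GitHub's report:

```python
import json
import urllib.request

API = "https://api.github.com"
# Bot login is an assumption; check your PR's reviewer list for the exact name.
COPILOT_BOT = "copilot-pull-request-reviewer[bot]"

def build_rereview_request(owner: str, repo: str, pr: int, token: str) -> urllib.request.Request:
    """Build (but don't send) a POST that re-requests a Copilot review on a PR."""
    url = f"{API}/repos/{owner}/{repo}/pulls/{pr}/requested_reviewers"
    body = json.dumps({"reviewers": [COPILOT_BOT]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# To actually send it: urllib.request.urlopen(build_rereview_request(...))
```

Separating "build the request" from "send it" keeps the logic testable without a token or network access.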

Enterprise customers managing AI agents were hit twice. The first incident, resolved on December 8, prevented viewing agent session activities in the Enterprise AI Controls page for nearly two weeks (November 26–December 8), though audit logs remained accessible. Then on December 18, Copilot policy updates broke entirely due to database schema drift, locking administrators out of policy configuration for 1.5 hours.
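"Schema drift" means the live database schema no longer matches what the application code expects, typically because a migration landed in one environment but not another. A toy sketch of the kind of validation check that catches this, written against SQLite; the table and column names are invented for the example, not GitHub's actual schema:

```python
import sqlite3

# Invented, illustrative schema; not GitHub's actual tables.
EXPECTED = {"copilot_policies": {"id", "enterprise_id", "policy_name", "enabled"}}

def find_schema_drift(conn, expected=EXPECTED):
    """Return {table: {"missing": ..., "unexpected": ...}} for drifted tables."""
    drift = {}
    for table, want in expected.items():
        # PRAGMA table_info rows are (cid, name, type, notnull, default, pk).
        have = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
        if have != want:
            drift[table] = {"missing": want - have, "unexpected": have - want}
    return drift
```

Running a check like this in the deploy pipeline surfaces a missing migration before it breaks writes in production.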

GitHub Actions users in West US saw intermittent timeouts on December 18 (1.5% of larger/standard runners affected), caused by network packet loss between runners and an edge site. Unauthenticated requests—including release downloads from Actions jobs—also degraded on December 22 due to a traffic spike hitting search endpoints.

Should You Care?

If you use Copilot Code Review: Yes. A 47% failure rate is significant. GitHub's mitigation involved bypassing fix suggestions to reduce latency and increasing worker capacity, but the real fix was a model configuration change. They've now increased baseline capacity and improved alerting, so recurrence should be less likely.

If you manage GitHub Enterprise with AI agents: The two-week AI Controls outage is worth noting. It didn't break agent functionality—audit logs and direct navigation still worked—but visibility into session activity was gone. GitHub is hardening monitoring for data pipeline dependencies to prevent similar issues.

If you run Actions in West US: The impact was small (0.28% of all jobs), but if you're in that region, you may have seen intermittent failures. GitHub is improving cross-cloud connectivity detection.
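The kind of detection GitHub describes can be approximated with a sliding-window failure-rate alarm over job outcomes. This sketch is illustrative only (class name and window size invented; the 1.5% threshold borrowed from the incident figure), not GitHub's implementation:

```python
from collections import deque

class FailureRateMonitor:
    """Alert when the failure rate over a sliding window crosses a threshold."""

    def __init__(self, window: int = 200, threshold: float = 0.015):
        self.window = deque(maxlen=window)  # 1 = failure, 0 = success
        self.threshold = threshold

    def record(self, success: bool) -> None:
        self.window.append(0 if success else 1)

    @property
    def rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def alerting(self) -> bool:
        # Only alert once the window is full, to avoid noise on startup.
        return len(self.window) == self.window.maxlen and self.rate > self.threshold
```

Keeping one monitor per region/runner pair is what turns a global 0.28% blip into a visible 1.5% regional signal.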

If you're on public GitHub: The December 22 traffic spike affected unauthenticated requests, so public API consumers and release downloads may have timed out. Authenticated users were unaffected. GitHub is tightening rate limiters and improving traffic anomaly detection.
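Clients that hit this kind of transient degradation fare better with retries and jittered exponential backoff than with a single attempt, which also avoids hammering an already-throttled endpoint. A generic sketch under that assumption (nothing GitHub-specific; the function names are the example's own):

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}  # throttled or transient server errors

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0) -> list:
    """Full-jitter backoff: attempt i sleeps a random time in [0, min(cap, base * 2**i)]."""
    return [random.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

def fetch_with_retry(fetch, attempts: int = 5, base: float = 1.0):
    """fetch() -> (status, body); retry retryable statuses, then return the last result."""
    status, body = fetch()
    for delay in backoff_delays(attempts, base=base):
        if status not in RETRYABLE:
            return status, body
        time.sleep(delay)
        status, body = fetch()
    return status, body
```

Full jitter (a uniform draw up to the exponential cap) spreads retries out so that many clients recovering at once don't produce a synchronized thundering herd.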

None of these incidents represent systemic failures, but they highlight the complexity of running AI-backed services at scale. GitHub's post-incident improvements—better monitoring, schema validation, load-shedding, and alerting—suggest they're taking these seriously.

Source: GitHub Blog