GitHub Adds AI-Powered Security Detections for Shell, Terraform, and PHP
GitHub Code Security now uses AI to detect vulnerabilities in Shell, Terraform, Dockerfiles, and PHP — languages CodeQL struggles with. A hybrid model pairs static analysis with AI detections, surfacing findings in pull requests. Public preview Q2 2025.
TL;DR
- GitHub Code Security now uses AI to detect vulnerabilities in languages CodeQL doesn't cover well — Shell/Bash, Dockerfiles, Terraform (HCL), and PHP
- Hybrid model pairs traditional static analysis (CodeQL) with AI detections, surfacing findings directly in pull requests
- Internal testing showed 80%+ positive developer feedback across 170,000 findings in 30 days
- Public preview coming Q2 2025 — security teams protecting polyglot codebases should pay attention
The Big Picture
CodeQL is excellent at finding vulnerabilities in enterprise languages like Java, C#, and JavaScript. But modern repos don't stop there. You've got Bash scripts automating deployments, Terraform defining infrastructure, Dockerfiles building containers, and legacy PHP still running in production.
Static analysis tools struggle with these ecosystems. The semantics are messy, the patterns are context-dependent, and building comprehensive query libraries takes years. Security teams end up with blind spots — code that ships without the same scrutiny applied to your main application logic.
GitHub's answer is a hybrid detection model. CodeQL continues handling deep semantic analysis for supported languages. AI-powered detections fill the gaps, scanning Shell scripts, Terraform configs, Dockerfiles, and PHP for common vulnerability patterns. Both run automatically on pull requests. Both surface findings in the same interface developers already use for code review.
This isn't replacing static analysis. It's acknowledging that static analysis alone can't keep up with the pace and diversity of modern development. If your security posture depends on catching issues before merge, you need coverage across everything developers commit — not just the languages your SAST tool was built for a decade ago.
How It Works
When you open a pull request, GitHub Code Security analyzes the diff. For languages with mature CodeQL support, it runs traditional static analysis. For ecosystems where static analysis coverage is thin or nonexistent, it runs AI-powered security detections instead.
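The split can be pictured as a simple dispatch on file type. This is an illustrative sketch only — the `scan_route` function and its extension-to-engine mapping are assumptions made for explanation, not GitHub's actual routing logic:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the hybrid split: languages with mature CodeQL
# support go to static analysis; the newly covered ecosystems go to AI
# detections. The mapping is illustrative, not GitHub's implementation.
scan_route() {
  case "$1" in
    *.java|*.cs|*.js|*.ts)              echo "codeql" ;;
    *.sh|*.bash|*.tf|*.php|*Dockerfile) echo "ai-detections" ;;
    *)                                  echo "unscanned" ;;
  esac
}

scan_route src/App.java    # prints "codeql"
scan_route infra/main.tf   # prints "ai-detections"
```

The point of the sketch is the fallback shape: instead of leaving `*.tf` or `Dockerfile` in the "unscanned" bucket, the hybrid model gives them their own analysis path.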
The AI model looks for vulnerability patterns: unsafe string concatenation in SQL queries or shell commands, weak cryptographic algorithms, infrastructure misconfigurations that expose sensitive resources. These are the same classes of bugs that CodeQL queries target, but detected through pattern recognition rather than semantic analysis.
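To make the distinction concrete, here is pattern matching at its crudest — a toy grep over a script for textually risky shell idioms. The real AI detections reason about context; this list of patterns is purely an illustration of "pattern recognition" versus the data-flow tracking a semantic analyzer performs:

```shell
#!/usr/bin/env sh
# Toy pattern matcher: flags a few textually risky shell idioms
# (eval of dynamic strings, curl piped into a shell, world-writable
# permissions). Illustrative only — not GitHub's detection model.
flag_risky_patterns() {
  grep -nE 'eval |curl [^|]*\| *(ba)?sh|chmod 777' "$1" || true
}
```

A grep like this would drown in false positives on real code, which is exactly why the article's 80% positive-feedback number matters: the value of AI detections is keeping the breadth of pattern matching without its noise.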
Results appear inline in the pull request, alongside any CodeQL findings. Developers see the issue, the risk, and a suggested fix — all before the code merges. No separate security dashboard. No post-merge alerts that require context-switching back to a branch from two weeks ago.
GitHub tested this internally over 30 days, processing more than 170,000 findings. Developer feedback was positive in over 80% of cases, which is a strong signal that the detections aren't generating excessive noise. False positives kill adoption faster than anything else in security tooling.
The system is part of what GitHub calls its "agentic detection platform" — a broader architecture that powers security scanning, code quality checks, and code review suggestions. The same infrastructure that runs AI-powered security detections also drives other Copilot-powered workflows. This matters because it means the detection logic can evolve. As new vulnerability patterns emerge, the model can adapt without waiting for someone to write and test a new CodeQL query.
Copilot Autofix ties into this workflow. When a vulnerability is detected, Autofix generates a suggested remediation. Developers review it, test it, and apply it as part of the normal code review process. GitHub reports that Autofix has resolved more than 460,000 security alerts in 2025 so far, with an average resolution time of 0.66 hours compared to 1.29 hours without it.
The enforcement layer sits at merge time. Because GitHub controls the pull request workflow, security teams can block merges based on policy — whether the finding came from CodeQL or AI-powered detections. This is a meaningful advantage over external SAST tools that scan after the fact and rely on developers to go back and fix issues post-merge.
What This Changes For Developers
If you're writing Terraform modules, Dockerfiles, or Bash scripts, you're about to get the same inline security feedback that Java and JavaScript developers have had for years. That's the practical impact.
For security teams, this expands the scope of what you can enforce at the pull request level. You're no longer limited to the languages CodeQL supports deeply. You can catch misconfigurations in infrastructure-as-code before they reach production. You can flag unsafe shell command construction in CI scripts. You can identify weak crypto in legacy PHP without waiting for a penetration test to surface it.
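The unsafe shell command construction mentioned above is worth seeing concretely. Below is a minimal before/after sketch using hypothetical `backup_*` helpers (not from GitHub's announcement): the `eval` version re-parses its argument as shell syntax, so a crafted filename injects extra commands, while the quoted version passes it through as a single inert argument.

```shell
#!/usr/bin/env sh
# UNSAFE: eval re-parses $1 as shell syntax, so a filename like
# 'x; rm -rf ~' would execute a second command.
backup_unsafe() {
  eval "tar czf backup.tar.gz $1"
}

# SAFE: quoting makes $1 a single argument; the shell never
# re-interprets its contents, whatever characters it contains.
backup_safe() {
  tar czf backup.tar.gz "$1"
}
```

This class of bug is common in CI scripts precisely because they interpolate branch names, tags, and user-controlled paths into command lines — which is why pre-merge flagging of it matters.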
The shift from post-merge scanning to pre-merge enforcement is significant. Post-merge alerts create friction. Developers have moved on. The branch is stale. The context is gone. Fixing the issue requires reopening work that felt finished. Pre-merge findings are just part of code review — another comment to address before you hit merge.
This also changes the calculus for teams evaluating GitHub Advanced Security. If your codebase is heavily polyglot — especially if you're managing infrastructure-as-code alongside application logic — the expanded language coverage makes the platform more defensible as a single security solution. You're not bolting on separate tools for Terraform scanning or Dockerfile linting.
Try It Yourself
Public preview is planned for early Q2 2025. If you're already using GitHub Advanced Security, you'll likely get access automatically once it rolls out. GitHub hasn't published detailed setup instructions yet, but based on how Code Security features typically work, expect this to be enabled at the repository or organization level through security settings.
In the meantime, if you want to see how GitHub's existing security tooling works in practice, the practical guide to GitHub Advanced Security walks through CodeQL setup, secret scanning, and dependency review. The AI-powered detections will slot into the same pull request workflow described there.
GitHub will demo the feature at RSAC 2025 (booth #2327) in early May. If you're attending and want to see the detection quality firsthand, that's your chance to evaluate whether the AI-generated findings meet your bar for signal-to-noise ratio.
The Bottom Line
Use this if you're managing security for polyglot codebases and your current SAST tool leaves gaps in Shell, Terraform, Dockerfiles, or PHP. The hybrid model makes sense — CodeQL where it's strong, AI where static analysis struggles. The pull request integration is the right place to surface findings, and Copilot Autofix addresses the remediation bottleneck that kills most security programs.
Skip it if your codebase is homogeneous and already well-covered by CodeQL, or if you're not using GitHub Advanced Security at all. This is an incremental improvement for teams already invested in the platform, not a reason to migrate if you're happy with your current tooling.
The real risk here is over-reliance on AI detections without validating their accuracy in your specific context. 80% positive feedback in GitHub's internal testing is promising, but your codebase isn't GitHub's codebase. Plan to monitor false positive rates closely during the preview period, and be ready to tune or disable detections that generate too much noise. The opportunity is genuine expanded coverage without adding another tool to your stack — but only if the detection quality holds up at scale.
Source: GitHub Blog