What AI Coding Tools Are Actually Good For, According to Devs

GitHub's developer advocates share what actually works in AI coding tools based on real usage data: inline suggestions beat chat interfaces, AI should handle tedium while you handle judgment, and protecting flow state matters more than automation.

TL;DR

  • AI coding tools work best when they blend into your existing workflow, not when they demand your attention
  • Chat interfaces have their place, but forcing everything through a chatbox kills flow state
  • The most valuable AI features handle boilerplate and scaffolding while leaving architectural decisions to you
  • If you're learning to code, use AI to explain fundamentals — if you're senior, use it to skip tedious work

The Big Picture

GitHub's developer advocates are tired of the same questions. Does AI actually help? Can you trust it with production code? Is this marketing hype or real productivity?

Fair questions. The AI coding tool market is noisy, and plenty of products feel like they were built for demo videos rather than real work. But GitHub's Cassidy Williams and Visual Studio's Dalia Abo Sheasha sat down for a livestream to cut through the noise with actual developer feedback and usage data.

The answer isn't "AI is magic" or "AI is useless." It's more nuanced: AI coding tools are genuinely useful for specific tasks, but only when they respect how developers actually work. The difference between a helpful AI feature and an annoying one comes down to whether it protects or destroys your flow state.

Flow — that fragile mental state where code and ideas come easily — is what most developers optimize for. Anything that breaks it, even a well-meaning suggestion, is a problem. The best AI tools understand this. The worst ones don't.

How It Works

GitHub's approach to AI tooling centers on one principle: meet developers where they already work. That means the editor, the terminal, and code review. Not a separate dashboard. Not a mandatory chat interface. The places you're already spending your day.

The data backs this up. Developers consistently report that the most valuable AI experiences are contextual suggestions that surface at the right moment. Renaming a variable? The AI suggests a better name. Writing boilerplate? It autocompletes the pattern. These feel like a helpful colleague handing you a snippet, not an interruption demanding attention.

Chat interfaces, by contrast, have a specific use case that's narrower than the hype suggests. They're excellent for on-demand tasks: explaining unfamiliar code, navigating a new framework, scaffolding a template. But forcing all interaction into a chatbox creates a cognitive burden. As Abo Sheasha puts it: "I'm required to switch my attention off my code to a different place where there's a chat where I'm talking in natural language. It's a huge burden on your brain to switch to that."

The architecture of effective AI coding tools reflects this understanding. Inline suggestions use the same mental model as your editor's autocomplete. They appear in context, you accept or reject them with a keystroke, and you move on. No context switch. No breaking flow.

GitHub's telemetry shows that features which interrupt editing — pop-ups that flood the screen, suggestions that fire while you're actively typing, chat panels that demand attention — get disabled by users. The features that stick are the ones that feel invisible until you need them.

This extends to how different developers use the same tools. Senior developers already move fast. They have established workflows and muscle memory. AI tools that try to change their behavior need to prove immediate value or they get turned off. For these developers, the win is skipping repetitive scaffolding and documentation while maintaining full control over architecture and business logic.

Early-career developers and students have different needs. They're still building mental models of how code works. For them, AI explanation features are learning aids. The key distinction: use AI-generated explanations to deepen your understanding, not replace your own analysis. As Abo Sheasha notes, if you're learning syntax and fundamentals, use AI to explain those fundamentals so you build a strong foundation.

The technical implementation matters too. Most AI assistants offer customization: how often they suggest, how aggressive they are, what contexts trigger them. GitHub's testing shows that developers who take a few minutes to tune these settings report significantly higher satisfaction. The defaults won't work for everyone.
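As a concrete illustration, here is one possible tuning for VS Code users running GitHub Copilot. The setting keys below (`editor.inlineSuggest.enabled`, `github.copilot.enable`) reflect the Copilot extension's documented options at the time of writing, but treat this as a sketch and check your editor's settings reference for current names:

```jsonc
// settings.json (VS Code uses JSONC, so comments are allowed)
{
  // Keep inline suggestions on globally — the low-friction,
  // in-context experience the article describes.
  "editor.inlineSuggest.enabled": true,

  // Scope Copilot per file type: leave it on for code, but turn it
  // off for prose-heavy files where suggestions tend to interrupt
  // more than they help.
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false
  }
}
```

The point isn't these exact values — it's that per-language and per-context toggles exist, and a few minutes spent on them is how you find the "invisible until you need it" behavior the usage data favors.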

Not every AI feature lands well. GitHub has shipped features that failed: real-time interruptions, suggestions that fire while you're still thinking, pop-ups at the wrong moment. The product teams track those misses through telemetry and direct feedback, then adjust what ships next.


What This Changes For Developers

The practical impact of well-designed AI coding tools shows up in what you stop doing, not what you start doing. You stop writing boilerplate. You stop context-switching to Stack Overflow for syntax you've forgotten. You stop manually writing test scaffolding and documentation stubs.

What you don't stop doing: making architectural decisions. Debugging tricky issues. Understanding your codebase. Reviewing code for security and correctness. The human developer — your insight, judgment, experience — remains the critical ingredient.

This creates a clear division of labor. AI handles tedium. You handle creativity and judgment. When that balance tips too far in either direction, the tool stops being useful. Too much automation and you lose control. Too little and you're just using expensive autocomplete.

The workflow changes are subtle but meaningful. Instead of scaffolding a new API endpoint by hand, you let the AI generate the structure and you fill in the business logic. Instead of writing documentation from scratch, you let the AI draft it and you edit for accuracy and clarity. Instead of manually writing test cases, you generate the boilerplate and focus on edge cases.
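To make that division of labor concrete, here is a hypothetical sketch in Python. Every name and value is invented for illustration: the validation and error-handling scaffold is the kind of boilerplate an assistant can draft for you, while the pricing rule is the business logic you still write, review, and own.

```python
def create_order(payload: dict) -> dict:
    """Handle a hypothetical 'create order' request."""
    # --- AI-draftable scaffolding: shape checks and error handling ---
    required = {"item_id", "quantity"}
    missing = required - payload.keys()
    if missing:
        return {"status": 400, "error": f"missing fields: {sorted(missing)}"}
    if not isinstance(payload["quantity"], int) or payload["quantity"] <= 0:
        return {"status": 400, "error": "quantity must be a positive integer"}

    # --- Human-written business logic: the part that needs judgment ---
    unit_price = 5.00  # invented for the sketch; real code would look this up
    discount = 0.10 if payload["quantity"] >= 10 else 0.0
    total = round(payload["quantity"] * unit_price * (1 - discount), 2)
    return {"status": 201, "total": total}
```

The scaffold is mechanical and easy to verify at a glance; the discount rule is where a wrong suggestion would quietly cost money, which is exactly why it stays under human review.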

For teams using GitHub Copilot CLI, this extends to terminal workflows. Slash commands let you ask questions and get suggestions without leaving the command line. It's the same principle: meet developers where they work.

The learning curve matters too. If you're early in your career, AI tools can accelerate your understanding of patterns and idioms. But there's a trap: using AI as a shortcut instead of a learning aid. The developers who get the most value are the ones who treat AI suggestions as teaching moments, not copy-paste solutions.

There's also AI fatigue to contend with. Every tool now has an AI feature, and most of them are poorly implemented. The good use cases get drowned out by the noise. GitHub's approach is to let the genuinely useful patterns rise to the top through developer feedback and usage data, then double down on those.

Try It Yourself

To get practical value from AI coding tools, follow these guidelines:

  • Understand and review what you accept. Even if a suggestion looks convenient, know exactly what it does. This is especially critical for code affecting security, architecture, or production reliability. GitHub's Security Lab has published research on using AI for security research, but that doesn't mean you should blindly trust AI-generated code in production.
  • Use explain features as learning aids, not shortcuts. These help solidify knowledge, but they don't replace reading documentation or thinking through problems yourself.
  • Tune suggestion frequency and style. Most tools let you control intrusiveness and specificity. Don't stick with defaults that annoy you. Spend ten minutes in settings and find your comfort zone.
  • Give honest feedback early and often. Your frustrations and feature requests genuinely shape product roadmaps. GitHub's teams rely on direct developer feedback to decide what ships next.
  • Start small. Enable AI suggestions for one specific task — maybe autocomplete or documentation generation — and see if it actually helps. Expand from there if it does.

If you're already using GitHub Copilot, experiment with minimizing the chat panel and relying more on inline suggestions. If you're new to AI coding tools, start with on-demand features like code explanations rather than always-on autocomplete.

The Bottom Line

Use AI coding tools if you spend significant time on boilerplate, scaffolding, or documentation and you want that time back. Skip them if you're working on highly specialized domains where AI suggestions are more noise than signal, or if you're learning fundamentals and need to build muscle memory without shortcuts.

The real opportunity here isn't productivity theater. It's reclaiming time spent on tedious work so you can focus on problems that actually require human judgment. The real risk is treating AI suggestions as gospel instead of starting points that need your review and understanding.

GitHub's bet is that AI tools succeed when they empower developers without replacing their judgment. Based on the usage data and developer feedback they're seeing, that bet is paying off — but only for tools that respect flow state and integrate into existing workflows. Everything else is just noise.

Source: GitHub Blog