Five Worst Instructions for AI Coding Agents

Expert developers waste tokens and revert code constantly, not because they lack skill but because they treat AI agents like compilers. Cline breaks down five patterns that kill productivity and how to fix them.

TL;DR

  • Cline breaks down five common prompting mistakes that waste tokens and tank code quality
  • The core issue: developers treat AI agents like compilers, not collaborators who need context
  • Shift from vague commands ("make it better") to specific reasoning ("extract validation logic because error handling is unclear")

What Dropped

Cline published a detailed guide on the five worst instructions developers give AI coding agents. The post walks through real patterns that waste API tokens, degrade code quality, and frustrate experienced developers—including a story about someone burning $200 in a single afternoon on derailed conversations.

The Dev Angle

The core insight: decades of compiler-based thinking have trained us to be terse and minimal. "Refactor this function." "Fix this bug." "Make it better." That works for compilers. It fails for AI agents, which need context, reasoning, and explicit priorities to make good decisions.

The five patterns Cline identifies are:

  • Vague objectives — "Make it better" forces the AI to guess. Instead, be specific: "Extract validation logic into a separate function because error handling is unclear."
  • Contradictory constraints — "Add OAuth but keep it simple" creates confusion. Prioritize explicitly: "I care more about security than implementation simplicity."
  • Course-correcting derailed conversations — Research shows LLM performance drops 39% when instructions arrive across multiple turns. Instead of piling on corrections, use checkpoints to rewind and rewrite the original prompt with the missing context.
  • Incomplete bug reports — "Fix this bug" doesn't work. Share your debugging history: what you've tried, what didn't work, where you suspect the problem. Let Cline read relevant files with @filename syntax.
  • Compiler-style terseness — Minimal commands worked for tools with strict rules. AI agents need your reasoning. Explain the "why" behind the "what."
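To make the first pattern concrete, here is a minimal sketch of the refactor that the specific prompt ("extract validation logic into a separate function because error handling is unclear") might produce. The `create_user`/`validate_user` functions and their fields are invented for illustration; they are not from the Cline post.

```python
# Before (hypothetical): validation tangled into the handler, every
# failure raising the same opaque "bad input" error.
#
# def create_user(data):
#     if "@" not in data.get("email", "") or not data.get("name"):
#         raise ValueError("bad input")
#     return {"email": data["email"], "name": data["name"]}

# After: validation extracted into its own function, so each error
# names the field and the rule that failed.
def validate_user(data: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if "@" not in data.get("email", ""):
        errors.append("email: must contain '@'")
    if not data.get("name"):
        errors.append("name: must not be empty")
    return errors

def create_user(data: dict) -> dict:
    errors = validate_user(data)
    if errors:
        raise ValueError("; ".join(errors))
    return {"email": data["email"], "name": data["name"]}
```

The point is not this particular code but that the prompt's "because error handling is unclear" clause tells the agent which outcome to optimize for, instead of leaving it to guess what "better" means.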

The post emphasizes that Cline's Plan and Act modes are built for this shift. Plan mode lets you negotiate tradeoffs and clarify requirements before generating code. Checkpoints let you rewind conversations to before they went wrong. File exploration with @ syntax gives the AI complete context without forcing you to paste everything into chat.

Should You Care?

If you're an experienced developer struggling with AI agents—reverting code frequently, burning tokens on derailed conversations, or getting results that feel "almost right"—this directly addresses your workflow. The patterns Cline describes are habits, not intelligence gaps. They're learnable.

If you're already getting good results from AI agents, this is a framework for understanding why and teaching others. If you're new to AI-assisted coding, this saves you from developing bad habits in the first place.

The practical payoff: fewer reverts, lower token spend, better code quality, and faster iteration. The mindset shift is the real value—treating AI agents as collaborators who need context rather than compilers who need commands.

Source: Cline