DeepSeek V3.2 and V3.2-Speciale Now in Cline
DeepSeek V3.2 and V3.2-Speciale are now available in Cline. Both models integrate reasoning directly into tool execution for agentic workflows. V3.2 for daily coding; Speciale for hard problems.
TL;DR
- DeepSeek V3.2 and V3.2-Speciale are now available in Cline's provider dropdown
- V3.2 is built for agentic workflows with reasoning integrated into tool execution, not as a separate step
- V3.2 for daily coding; V3.2-Speciale for hard problems. Both cost $0.28/M input, $0.42/M output tokens
What Dropped
DeepSeek released two models optimized for agentic AI workflows, and they're now live in Cline. V3.2 is the balanced daily driver. V3.2-Speciale is the high-compute variant for genuinely difficult problems. Both integrate reasoning directly into tool execution rather than treating it as a separate phase.
The Dev Angle
The key innovation here is "thinking in tool-use." Previous models reason first, then execute tools. V3.2 reasons while executing tools—maintaining its chain of thought across multiple tool calls within a single execution. For Cline users, this matters because agentic coding is constant tool execution: read files, write code, run commands, check results.
V3.2 retains reasoning traces across tool calls until a new user message arrives. Tool outputs alone don't wipe the context. In a read → edit → run → check cycle, the model keeps its reasoning thread intact instead of re-deriving context on each step. DeepSeek trained both models on 1,800+ synthesized environments and 85,000+ complex agent instructions—24,667 code agent tasks, 50,275 search agent tasks, 5,908 Jupyter interpreter tasks, and 4,417 general agent tasks. They also allocated over 10% of pre-training compute to reinforcement learning, which is unusually high for an open-source model.
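The retention rule above boils down to a simple message-history policy: tool results accumulate, reasoning persists, and only a fresh user turn resets the thread. Here is a minimal sketch of that policy; the message shapes and the `update_history` helper are invented for illustration, not DeepSeek's or Cline's actual code.

```python
# Hypothetical sketch of the reasoning-retention policy described above.

def update_history(history, message):
    """Append a message, clearing stored reasoning only on a new user turn."""
    if message["role"] == "user":
        # A new user message starts a new reasoning thread:
        # drop prior reasoning entries but keep the conversation itself.
        history = [m for m in history if m["role"] != "reasoning"]
    history.append(message)
    return history

history = []
history = update_history(history, {"role": "user", "content": "fix the bug"})
history = update_history(history, {"role": "reasoning", "content": "bug is in parse()"})
history = update_history(history, {"role": "tool", "content": "read_file: parser.py"})
# Tool output alone does not wipe the reasoning trace...
assert any(m["role"] == "reasoning" for m in history)
history = update_history(history, {"role": "user", "content": "now add tests"})
# ...but a new user message does.
assert not any(m["role"] == "reasoning" for m in history)
```

The practical effect in a read → edit → run → check cycle is that each tool result lands on top of an intact chain of thought rather than forcing the model to reconstruct why it made the previous call.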
Both models share the same pricing and 131K token context window. Input costs $0.28 per million tokens; output is $0.42 per million. Neither supports images, browser use, or prompt caching. The context window is smaller than Claude's 200K or Gemini's 1M+, so large codebases will hit limits faster.
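At those rates, per-turn cost is easy to estimate. A quick arithmetic sketch, using made-up example token counts:

```python
# Per-million-token prices from the announcement.
INPUT_PER_M = 0.28
OUTPUT_PER_M = 0.42

def turn_cost(input_tokens, output_tokens):
    """Dollar cost of one request at V3.2 / V3.2-Speciale pricing."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical agentic turn: 40K tokens of context in, 3K tokens out.
cost = turn_cost(40_000, 3_000)
print(f"${cost:.4f}")  # roughly $0.0125 for this turn
```

Because agentic sessions resend accumulated context on every tool call, the input side dominates as a task grows, which is also where the missing prompt-caching support is felt most.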
Should You Care?
If you're a Cline user: V3.2 is worth testing for everyday coding tasks. It's cheaper than GPT-5.2 and designed specifically for agentic workflows. V3.2-Speciale is your pick when you're stuck on a genuinely hard problem and want maximum reasoning depth—DeepSeek reports gold-medal performance on the 2025 IMO, IOI, ICPC World Finals, and CMO using this variant.
If you're evaluating Cline providers: This expands your options. Cline now runs on Vercel AI Gateway, giving you flexibility across multiple model providers. DeepSeek's agentic training is a genuine differentiator—the model understands tool-use patterns in ways general-purpose models don't.
If you're on a tight token budget: V3.2 is more token-efficient than V3.2-Speciale. Speciale uses significantly more output tokens to reach its conclusions, so reserve it for problems that actually need it.
If you need image support or prompt caching: Neither model has these yet. That's a real limitation if your workflow depends on vision or aggressive caching strategies.
Both models are available now in the Cline provider dropdown. Select V3.2 for daily use or V3.2-Speciale when you need to throw maximum reasoning at a problem. Share your experiences on Reddit or Discord.
Source: Cline