Cline v3.41: GPT-5.2, Devstral 2, Faster Model Switching

Cline v3.41 ships GPT-5.2 Thinking (80% on SWE-bench), Devstral 2 (7x cheaper, free launch), and a faster model picker. Test new models and switch between them without friction.

TL;DR

  • Cline v3.41 adds GPT-5.2 Thinking, Devstral 2, and a redesigned model picker
  • GPT-5.2 scores 80% on SWE-bench Verified; Devstral 2 is 7x cheaper than Claude Sonnet and free during launch
  • Update now to access faster model switching and Responses API support for smoother multi-step workflows

What Dropped

Cline v3.41 brings three major upgrades: OpenAI's GPT-5.2 Thinking model, Mistral's newly revealed Devstral 2, and a completely redesigned model picker that cuts friction out of switching between providers and modes.

The Dev Angle

GPT-5.2 Thinking is now available in Cline, scoring 80% on SWE-bench Verified and 55.6% on SWE-bench Pro. Enable "thinking" mode to unlock extended reasoning for complex, multi-step agentic workflows. The model handles tool calling and long-context reasoning with fewer breakdowns between steps, which helps when you're tackling intricate refactoring or architectural decisions.

Devstral 2 (the previously stealth "Microwave" model from Mistral AI) scores 72.2% on SWE-bench Verified while costing up to 7x less than Claude Sonnet. It's free during launch, making it worth testing if cost efficiency matters for your workflow. Select mistralai/devstral-2512 from the provider dropdown to try it.
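If you'd rather probe the model outside Cline first, the same ID works against an OpenAI-compatible chat endpoint such as OpenRouter's. A minimal sketch of the request shape (the endpoint URL and `OPENROUTER_API_KEY` handling are assumptions for illustration, not Cline internals; the send itself is left to you):

```python
import json
import os

# OpenAI-compatible chat-completions endpoint (OpenRouter's URL shown;
# adjust if your provider differs).
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Assemble URL, headers, and payload for a Devstral 2 call.

    Returns the pieces without sending them, so you can inspect or
    hand them to any HTTP client.
    """
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "payload": {
            "model": "mistralai/devstral-2512",  # same ID as in the Cline dropdown
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("Refactor this function to remove duplication.")
print(json.dumps(req["payload"], indent=2))
```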

Model switching is now more ergonomic. The picker shows only the providers you've configured (those with API keys added) rather than the full list. Search across all models when you need something specific. Toggle Plan/Act mode with a sparkle icon and enable thinking with one click. Thinking budget adjustments and new provider setup still live in settings.

Codex Responses API support is now live for gpt-5.1-codex and gpt-5.1-codex-max. This newer API handles conversation state server-side and preserves reasoning across tool calls, smoothing out multi-step workflows. Requires Native Tool Calling enabled.
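The practical difference is in the request shape: instead of replaying the whole conversation on every turn, each follow-up references the previous response by ID and the server restores context (including reasoning) from it. A sketch of the two payload shapes under the Responses API conventions (payloads only, no live call; the `run_tests` tool and `call_123` ID are hypothetical):

```python
def first_turn(prompt: str) -> dict:
    """Initial Responses API request: full input, no prior state."""
    return {
        "model": "gpt-5.1-codex",
        "input": prompt,
        # Responses API tool entries are flat (no nested "function" object).
        "tools": [{
            "type": "function",
            "name": "run_tests",  # hypothetical tool for illustration
            "description": "Run the project test suite",
            "parameters": {"type": "object", "properties": {}},
        }],
    }

def follow_up(previous_response_id: str, tool_output: str) -> dict:
    """Continuation: send only the new tool output plus a pointer to the
    prior response; conversation state stays server-side."""
    return {
        "model": "gpt-5.1-codex",
        "previous_response_id": previous_response_id,
        "input": [{
            "type": "function_call_output",
            "call_id": "call_123",  # hypothetical ID from the first response
            "output": tool_output,
        }],
    }
```

The second payload is what "preserves reasoning across tool calls" means in practice: the client never re-serializes history, so long agentic chains stay cheap and consistent.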

Minor additions: Amazon Nova 2 Lite is available, DeepSeek 3.2 joined the native tool calling allow list, and several bugs were fixed (non-blocking checkpoint commits, Gemini Vertex thinking parameter errors, Ollama streaming cancellation).

Should You Care?

If you're using Cline for complex coding tasks, GPT-5.2 Thinking is worth testing—the reasoning improvements show up in multi-step workflows. If cost is a constraint, Devstral 2's 7x efficiency gain and free launch period make it a no-brainer experiment.

The model picker redesign benefits everyone. Faster switching means less context-switching friction when you're comparing models or toggling between modes. If you're already happy with your current setup, nothing breaks—this is pure UX polish.

The Responses API support is technical but meaningful: if you're running agentic workflows that chain multiple tool calls, the server-side state preservation should reduce failures and improve reasoning continuity.

Source: Cline