GPT-5.4 mini in Codex: 2x Faster, 70% Cheaper

GPT-5.4 mini is now live in Codex. It's 2x faster, uses 70% fewer tokens, and improves over GPT-5 mini on coding and reasoning. Perfect for codebase exploration and subagent work.

TL;DR

  • GPT-5.4 mini now available in Codex — 2x faster than GPT-5.4 and uses only 30% of its token budget
  • Better at coding, reasoning, image understanding, and tool use than GPT-5 mini
  • Use it for codebase exploration, file review, and subagent work; stick with GPT-5.4 for complex planning

New

  • GPT-5.4 mini model — A fast, efficient model for lighter coding tasks. It runs 2x faster than GPT-5.4 and uses 70% fewer tokens, letting comparable tasks run roughly 3.3x longer before hitting usage limits.
  • Multi-platform availability — Available in Codex app, CLI, IDE extension, web, and API.
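The "3.3x longer" figure above follows directly from the token reduction; a quick sanity check of the arithmetic (normalized numbers from the announcement, not real token counts):

```python
# Rough arithmetic behind the "3.3x longer" claim.
# Token figures are normalized; actual usage varies by task.
full_task_tokens = 1.0    # GPT-5.4 token usage for one task (normalized)
mini_task_tokens = 0.30   # GPT-5.4 mini uses 70% fewer tokens
tasks_per_budget = full_task_tokens / mini_task_tokens
print(round(tasks_per_budget, 1))  # → 3.3
```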

How to Switch

  • CLI: Start a new thread with codex --model gpt-5.4-mini or use /model during a session.
  • IDE extension: Select GPT-5.4 mini from the model selector in the composer.
  • Codex app: Choose GPT-5.4 mini from the model selector in the composer.
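Beyond per-session flags, the Codex CLI can also pin a default model in its configuration file. A minimal sketch, assuming the standard `~/.codex/config.toml` location and `model` key (check your CLI version's docs before relying on this):

```toml
# ~/.codex/config.toml — hypothetical default-model setting
# Makes every new session use GPT-5.4 mini without passing --model each time.
model = "gpt-5.4-mini"
```

You can still override the default for a single session with `codex --model` or the `/model` command.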

When to Use It

  • Codebase exploration and large-file review
  • Processing supporting documents
  • Subagent work and less reasoning-intensive tasks
  • Reserve GPT-5.4 for complex planning, coordination, and final judgment calls

Update your CLI, IDE extension, or Codex app to the latest version if GPT-5.4 mini doesn't appear yet.

Source: Codex