Codex GPT-5.4 mini: 2x Faster at 30% of the Cost
GPT-5.4 mini is now live in Codex. It's 2x faster, costs 30% as much per token as GPT-5.4, and handles codebase exploration and file review efficiently.
TL;DR
- GPT-5.4 mini now available in Codex — 2x faster than GPT-5.4, at 30% of its token cost
- Better coding, reasoning, image understanding, and tool use across all Codex interfaces
- Use it for codebase exploration, file review, and subagent work; stick with GPT-5.4 for complex planning
New
- GPT-5.4 mini model — Fast, efficient model for lighter coding tasks that runs 2x faster and uses 70% fewer tokens than GPT-5.4, letting comparable tasks run 3.3x longer before hitting rate limits.
- Multi-interface availability — GPT-5.4 mini is available in the Codex app, CLI, IDE extension, web interface, and API.
How to Switch
- CLI: Start a new thread with `codex --model gpt-5.4-mini`, or use `/model` during a session.
- IDE extension: Select GPT-5.4 mini from the model selector in the composer.
- Codex app: Choose GPT-5.4 mini from the model selector in the composer.
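If you'd rather make mini the default than pass the flag on every launch, the CLI also reads a config file. A minimal sketch, assuming the CLI's standard `~/.codex/config.toml` location and `model` key — check your install's docs if the file lives elsewhere:

```toml
# ~/.codex/config.toml — default model for new CLI sessions
model = "gpt-5.4-mini"
```

A per-session `codex --model` flag or `/model` switch still overrides this default.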
When to Use It
- Codebase exploration and large-file review
- Processing supporting documents
- Subagent work and less reasoning-intensive tasks
- Reserve GPT-5.4 for complex planning, coordination, and final judgment calls
Update your CLI, IDE extension, or Codex app to the latest version if GPT-5.4 mini doesn't appear yet.
Source: Codex