Kiro Adds GLM-5 Model Support

GLM-5, a sparse mixture-of-experts model with a 200K-token context window, is now available in Kiro for handling repository-scale codebases and complex agentic tasks.

TL;DR

  • GLM-5 is now available in Kiro IDE and CLI with a 200K context window
  • Sparse mixture-of-experts model optimized for multi-file codebases and agentic tasks
  • Runs in us-east-1 with 0.5x credit multiplier; restart IDE/CLI to access

New

  • GLM-5 model support — Sparse mixture-of-experts model with a 200K context window, supporting repository-scale context and multi-step tool use across large codebases.
  • Experimental availability — GLM-5 accessible in both Kiro IDE and Kiro CLI via model selector after restart.
  • Optimized for complex tasks — Designed for cross-file migrations, full-stack feature development, and legacy refactoring where coherence across large codebases matters.
  • Cost-efficient inference — Runs in us-east-1 (N. Virginia) with 0.5x credit multiplier.

Restart your IDE or CLI to access GLM-5 from the model selector.

Source: Kiro