JetBrains Junie Roadmap: Smart Plans, Model Switching, and Scale

JetBrains reveals its roadmap for Junie: transparent planning, automatic model switching, and scaling to handle tasks with hundreds of files. The goal isn't faster code generation — it's collapsing the time from idea to deployment.

TL;DR

  • JetBrains is building Junie around end-to-end development speed — from idea to deployment, not just code generation
  • Plan feature gives you transparent reasoning and stop-continue control; real-time steering mode coming soon
  • Automatic model selection tests multiple LLMs against benchmarks so you don't have to guess which one to use
  • Junie hit 60.8% on SWE-bench, one of the highest scores in the industry, up from 20% a year ago

The Big Picture

JetBrains just laid out its vision for Junie, and it's not about generating more code faster. It's about collapsing the time between "I have an idea" and "this is deployed and working." That's a different goal than most AI coding tools are chasing.

The team is explicit: speed isn't lines per second. It's the entire loop — formulating the task, understanding what the agent is doing, reviewing changes, running tests, deploying. If you spend ten minutes deciphering what your AI agent just did, you didn't save time. You lost it.

This roadmap is JetBrains' answer to the black-box problem. They're investing in transparency, control, and customization. The Plan feature shows you reasoning and intermediate steps. Stop-continue mode lets you course-correct mid-task. And the automatic model-switching approach means Junie picks the best LLM for the job without you having to benchmark them yourself.

The other big bet: scaling Junie to handle tasks with hundreds of files and steps, breaking them into parallel subtasks autonomously. Right now it's good for five to ten files. They want it to handle entire features.

How It Works

Junie's architecture is built around workflow stages, not just code completion. Ask mode lets you brainstorm and discuss ideas before committing to implementation. Code mode executes changes. The Plan feature — released recently — gives you a two-column interface: high-level reasoning on one side, specific steps on the other.

If the plan looks wrong, you stop Junie, add guidance in a follow-up, and continue. No need to roll back and start over. Real-time steering mode is coming, which will let you adjust direction while Junie is still running.
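The stop-continue workflow boils down to an agent loop that checks for user guidance between steps. A toy sketch of that idea, with invented function names and a plain queue standing in for the follow-up channel (this is not Junie's actual internals):

```python
import queue

def run_plan(steps, guidance_queue):
    """Execute plan steps one at a time, but drain any user guidance
    that arrived before each step instead of restarting the whole run."""
    log = []
    for step in steps:
        try:
            # Mid-run guidance adjusts the course without a rollback.
            note = guidance_queue.get_nowait()
            log.append(f"adjusted: {note}")
        except queue.Empty:
            pass
        log.append(f"executed: {step}")
    return log

q = queue.Queue()
q.put("use the existing HTTP client, don't add a dependency")
print(run_plan(["read files", "edit handler", "run tests"], q))
```

The point of the structure is that guidance is consumed between steps, so a correction lands before the next action rather than after the whole task finishes.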

Terminal integration is getting serious attention. You can ask Junie to run scripts, read logs, or deploy. The interactive terminal shows you what's happening. Manual approval mode and brave mode control autonomy levels. Brave mode is getting more granular controls so you can tune how aggressive Junie is.

The model-switching strategy is interesting. JetBrains tests multiple models — including new releases — against real coding benchmarks and user feedback. Junie automatically picks the best one for your task. They compare it to driving an automatic transmission: the system shifts gears for you. An advanced mode for manual model selection is coming this year, but the default behavior is "we pick the best model so you don't have to."
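The "automatic transmission" idea reduces to routing each task to whichever model currently scores best on a relevant benchmark. A minimal sketch, with invented model names, scores, and task categories (JetBrains has not published its routing logic):

```python
# Hypothetical benchmark table: task category -> model -> score.
# All names and numbers here are made up for illustration.
BENCHMARKS = {
    "refactoring": {"model-a": 0.61, "model-b": 0.55},
    "greenfield":  {"model-a": 0.48, "model-b": 0.58},
}

def pick_model(task_category: str) -> str:
    """Return the best-scoring model for the category, falling back to
    the model with the best total score when the category is unknown."""
    scores = BENCHMARKS.get(task_category)
    if scores is None:
        totals = {}
        for per_task in BENCHMARKS.values():
            for model, score in per_task.items():
                totals[model] = totals.get(model, 0.0) + score
        return max(totals, key=totals.get)
    return max(scores, key=scores.get)

print(pick_model("refactoring"))
```

The design choice worth noting: the routing table can be refreshed as new models ship, so users get upgrades without changing anything on their end.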

Customization happens through .junie/guidelines.md right now. You can set preferences for how conservative or aggressive Junie should be. JetBrains is working on configurable dials for creativity, risk tolerance, and coding style strictness. Some settings will be guideline-based; others will be deterministic, like "always run tests before submitting."
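One way to picture the split between guideline-based and deterministic settings is a config object where some fields merely nudge the model while a rule list is always enforced. The field names below are hypothetical, not Junie's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSettings:
    # Guideline-based dials: soft preferences the model interprets.
    creativity: float = 0.3
    risk_tolerance: float = 0.2
    # Deterministic rules: enforced by the harness, not the model.
    hard_rules: list[str] = field(default_factory=lambda: [
        "run tests before submitting",
    ])

settings = AgentSettings(creativity=0.7)
print(settings.creativity, settings.hard_rules)
```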

The scaling plan involves breaking large tasks into smaller ones, completing them in parallel, and connecting the results. Right now Junie handles tasks with a few dozen steps. The goal is hundreds of files and steps, with Junie doing the decomposition autonomously.
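That decompose-run-integrate loop can be sketched with a thread pool. This is a toy with an invented decomposition; Junie's real orchestration is not public:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(feature: str) -> list[str]:
    # Stand-in for autonomous decomposition of a feature into subtasks.
    return [f"{feature}: {part}" for part in ("schema", "api", "ui", "tests")]

def run_subtask(subtask: str) -> str:
    # Stand-in for an agent completing one subtask.
    return f"done({subtask})"

def run_feature(feature: str) -> list[str]:
    subtasks = decompose(feature)
    with ThreadPoolExecutor(max_workers=4) as pool:
        # map() preserves subtask order, which keeps integration simple.
        return list(pool.map(run_subtask, subtasks))

print(run_feature("user-profiles"))
```

The hard part the sketch hides is exactly the one the roadmap names: getting the decomposition right, and merging results that may conflict.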

On benchmarks, Junie went from 20% on SWE-bench a year ago to 60.8% today. That's one of the highest scores in the industry. SWE-bench measures how well an AI produces correct patches for real-world GitHub issues, so it's a proxy for how much you can trust the output.

What This Changes For Developers

If you've used Junie before, the roadmap signals a shift from "AI that writes code" to "AI that manages tasks." The Plan feature is the clearest example. You're not just reviewing diffs anymore. You're reviewing intent, reasoning, and intermediate steps before they execute.

That changes the collaboration model. Instead of playing turn-based code review — where you spend ten minutes figuring out what the agent did, then comment, then wait for the next move — you're steering in real time. Stop, adjust, continue. It's closer to pair programming than code generation.

The automatic model selection removes a decision point. You don't need to know whether Claude 3.5 Sonnet or GPT-4o is better for refactoring versus greenfield code. Junie picks based on internal benchmarks. If you want control, the advanced mode is coming. But the default is "trust the system."

Customization through .junie/guidelines.md is underused. If you're not setting preferences for how Junie behaves, you're getting generic output. The upcoming dials for creativity and risk tolerance will make this more accessible, but the file-based approach works now.

The scaling ambition is the long-term play. If Junie can autonomously break a feature into subtasks, complete them in parallel, and integrate the results, that's a different workflow. You're delegating features, not functions. That's closer to managing a junior dev than using a code completion tool.

Try It Yourself

Junie is available now as part of JetBrains AI Pro and JetBrains AI Ultimate. The Pro tier is for evaluation and occasional use. Ultimate is for regular usage without token counting or run limits.

To get started, install Junie in your JetBrains IDE and create a .junie/guidelines.md file in your project root. Add preferences like:

```markdown
# Junie Guidelines

- Always write tests before submitting code
- Prefer functional programming patterns in JavaScript
- Be conservative with refactoring — only change what's necessary
- Run linters and formatters before marking tasks complete
```

Then open the Junie panel, switch to Ask mode, and describe a task. Watch the Plan feature show you reasoning and steps. Stop mid-execution if the direction looks wrong, add a follow-up with guidance, and continue.

If you want deeper context on how AI coding agents work under the hood, read this breakdown of agent architecture.

The Bottom Line

Use Junie if you want transparency and control over what your AI agent is doing. The Plan feature and stop-continue workflow are the clearest implementation of "show your work" in any coding agent right now. Skip it if you prefer a hands-off approach where the AI just makes changes and you review diffs afterward.

The real opportunity here is the automatic model selection. You're not benchmarking LLMs yourself or second-guessing whether you picked the right one. JetBrains is doing that work and switching models behind the scenes. That's valuable if you don't want to become an AI model expert just to write code.

The risk is the scaling ambition. Breaking tasks into parallel subtasks autonomously is hard. If Junie can't decompose problems correctly, you'll spend more time debugging the task breakdown than you would have just doing it yourself. But the 60.8% SWEBench score suggests the underlying code quality is there. The question is whether the orchestration layer can keep up.

Source: Junie