Building a Countdown App with GitHub Copilot: Context Windows, Plan Agent, and TDD
Chris Reddington built a countdown app live on stream and discovered practical patterns for working with AI: context window management, Plan agent for requirements, TDD with Copilot, and custom agents. Here's what actually worked.
TL;DR
- Context window management is a skill — start fresh chat sessions when old context becomes noise
- Plan agent reveals edge cases through clarifying questions before you write code
- TDD with Copilot catches bugs AI generates, just like it catches bugs you write
- Custom agents bring specialized expertise (UI performance, security, architecture) to specific tasks
- If you're building with AI coding tools and want cleaner outputs, this workflow matters
The Big Picture
Most developers treat AI chat like a continuous conversation. They keep adding context, piling on requirements, dragging old discussions into new problems. The context window becomes a junk drawer.
Chris Reddington built a New Year countdown app live on GitHub's Rubber Duck Thursdays stream. What started as a simple timer evolved into a timezone-aware, fireworks-laden celebration with a contribution graph theme. But the real story isn't the app — it's the workflow patterns he discovered while building it.
Context window management. Plan agent for requirement discovery. Test-driven development with Copilot. Custom agents for specialized tasks. These aren't theoretical best practices. They're practical techniques that produced working code, caught edge cases, and turned vague ideas into structured implementations.
The stream showed the messy middle of development. A world map that rendered as abstract art. Test failures that caught year rollover bugs. Requirements that evolved based on viewer suggestions. This is what real development with AI tools looks like when you're not cherry-picking the polished moments.
How It Works
Context Window Management: The Junk Drawer Problem
Reddington started with a specific prompt to generate a new workspace. Vite, TypeScript, Tailwind CSS v4. Dark theme. Countdown logic separated from DOM manipulation. GitHub Copilot generated custom instruction files automatically, capturing requirements before writing code.
The initial countdown worked. Days, hours, minutes, seconds ticking down to 2026. But when a viewer suggested timezone support, Reddington made a deliberate choice: he started a new chat session.
The workspace creation context wasn't needed anymore. Anything useful was already in the custom instructions file. Bringing in irrelevant history clutters the conversation and dilutes focus. Fresh context, fresh conversation, sharper results.
This is context engineering, not just prompt engineering. You're managing what the AI sees, not just what you ask for.
Plan Agent: Questions You Forgot to Ask
The timezone requirement was vague. Maybe a spinning globe. Maybe a world map with a time travel theme. Lots of maybes, no clear plan.
Plan agent doesn't create a plan from your initial prompt. It asks clarifying questions that reveal edge cases:
- Should the circular dial be primary with the world map as secondary, or vice versa?
- What happens on mobile: dropdown fallback or touch-friendly scroll?
- When a timezone passes midnight, show "already celebrating" with confetti, or a timer showing how long since midnight?
- Would there be subtle audio feedback when spinning the dial, or visual only?
These questions forced decisions. The celebration, not a reverse countdown. Performance considerations for fireworks (burst once, loop subtly, or continuous). Visual hierarchy for the map versus dial.
The livestream viewers voted. Map won. The plan pivoted to a world map as the primary selector with eight featured locations. The Plan agent output was saved to a separate Markdown file, becoming the source of truth for implementation.
Custom Agents: Specialized Expertise
Reddington reused custom agents from another project. A UI Performance Specialist agent reviewed the plan and suggested implementation details:
- Frame time budgets for animations
- Map SVG size optimization strategies
- Celebration particle limits and cleanup considerations
- Animation property recommendations (transform/opacity only)
- Reduced motion support
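The reduced-motion recommendation is small to wire up. Here's a minimal sketch in TypeScript; the helper names and the particle-count number are illustrative assumptions, not the agent's actual output:

```typescript
// Check the OS-level accessibility setting. matchMedia only exists in
// browsers, so guard the lookup to stay safe under Node as well.
export function prefersReducedMotion(): boolean {
  const mm = (globalThis as { matchMedia?: (q: string) => { matches: boolean } }).matchMedia;
  return mm ? mm("(prefers-reduced-motion: reduce)").matches : false;
}

// Keep the budgeting policy a pure function so it's testable without a DOM.
export function particleBudget(maxParticles: number, reduced: boolean): number {
  return reduced ? 0 : maxParticles;
}
```

The call site would pass `prefersReducedMotion()` as the second argument; the split keeps the policy trivially unit-testable.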
Custom agents let you create specialized personas for different development tasks. Security reviews. Architecture planning. Performance optimization. The awesome-copilot repository has examples.
Reddington added two more requirements: make the implementation modular, and write tests first based on expected behavior. Once the tests failed, write the implementation.
Test-driven development with Copilot.
TDD Cycle: Red, Green, Refactor
Copilot created test files for timezone utilities, city state management, and countdown logic. All failing tests. Red state. Good.
Then it implemented:
- Timezone utilities using the Intl.DateTimeFormat API
- City state with featured locations (New York, London, Tokyo, Sydney)
- localStorage persistence for selected timezones
- App state management
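The timezone math above needs no third-party library. A hedged sketch using the standard `Intl.DateTimeFormat` API, with function names and the storage key that are assumptions rather than the app's actual module API:

```typescript
// Read a zone's wall-clock fields at a given instant via Intl.DateTimeFormat.
function zonedParts(date: Date, timeZone: string): Record<string, string> {
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone,
    year: "numeric", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit", second: "2-digit",
    hourCycle: "h23", // avoids the "24:00" midnight quirk of hour12: false
  });
  return Object.fromEntries(fmt.formatToParts(date).map(p => [p.type, p.value]));
}

// Milliseconds until midnight, Jan 1 of the next year, in that zone.
// Uses Date.UTC for both instants so the host timezone never skews the
// result; any DST shift inside the target zone before midnight is ignored,
// which is fine for a countdown display.
export function msUntilNewYear(now: Date, timeZone: string): number {
  const p = zonedParts(now, timeZone);
  const localUtc = Date.UTC(+p.year, +p.month - 1, +p.day, +p.hour, +p.minute, +p.second);
  const targetUtc = Date.UTC(+p.year + 1, 0, 1, 0, 0, 0);
  return targetUtc - localUtc;
}

// Persist the selection (browser only; guarded for Node). The key name
// "selected-timezone" is an assumption.
export function saveZone(tz: string): void {
  (globalThis as { localStorage?: { setItem(k: string, v: string): void } })
    .localStorage?.setItem("selected-timezone", tz);
}
```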
With access to tools, the custom agent executed tests in the terminal. Two test cases failed, both in the logic that determined whether the celebration triggered correctly across the year rollover. The tests expected celebrations to start at midnight and to track the duration since they began.
Copilot caught the test failures, adjusted the timezone implementation, and the tests went green.
This is why TDD matters. AI-assisted development gets things wrong, just like developers do. Tests catch bugs before users do. The year rollover edge case would have been embarrassing to discover on December 31.
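The rollover expectation is easy to pin down before any implementation exists. A framework-free sketch of the red-first step; `isCelebrating` and its signature are assumptions for illustration, not the stream's actual API:

```typescript
// Hypothetical celebration logic: once the target instant passes, the app
// celebrates and tracks elapsed time instead of counting down further.
export function isCelebrating(now: Date, newYearMs: number): boolean {
  return now.getTime() >= newYearMs;
}

export function msSinceCelebrationStart(now: Date, newYearMs: number): number {
  return Math.max(0, now.getTime() - newYearMs);
}

// Red first: write these expectations, run them against an empty stub,
// watch them fail, then implement until they pass.
const target = Date.UTC(2026, 0, 1, 0, 0, 0);
console.assert(!isCelebrating(new Date(target - 1), target), "one ms early: still counting down");
console.assert(isCelebrating(new Date(target), target), "midnight exactly: celebrating");
console.assert(msSinceCelebrationStart(new Date(target + 5000), target) === 5000, "tracks elapsed time");
```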
The World Map Bug That Became a Feature
When Reddington opened the app, the countdown worked. Timezone selector worked. Calculations were correct. But the world map rendered as abstract art instead of geography.
The prompt was ambitious without enough context. No SVG asset, no reference to an existing mapping library. Just "add a mini world map." A reminder that AI can get things wrong.
Could he have fixed it? Absolutely. But they were over an hour into the stream with more features to build. So he left it. The map was a perfect example of iterative development where things don't always go right the first time.
Fireworks: Building Anticipation
Reddington switched back to Plan agent and created a new chat thread. Context window management again. The plan:
- Use Fireworks.js for effects
- More than 24 hours left: ambient stars, no fireworks
- 24 to 12 hours remaining: fireworks every 30 seconds
- One hour to 10 minutes: intensity builds
- Last 10 seconds: continuous fireworks for maximum celebration
Plus a skyline silhouette, dark night sky gradient, and a critical testing requirement: a query parameter to override time for manual testing. Nobody wanted to wait until 2026 to see if it worked.
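That override is a small amount of code with outsized testing value. A sketch assuming a `now` query parameter name, which the stream did not specify:

```typescript
// Allow ?now=2025-12-31T23:59:50Z to override the clock for manual testing.
// Parameter name and fallback behavior are assumptions for illustration.
export function resolveNow(search: string, realNow: () => Date = () => new Date()): Date {
  const override = new URLSearchParams(search).get("now");
  if (override) {
    const parsed = new Date(override);
    if (!Number.isNaN(parsed.getTime())) return parsed;
  }
  return realNow();
}
```

In the app you'd call `resolveNow(window.location.search)` wherever the countdown reads the current time, so every intensity threshold can be exercised on demand instead of at midnight.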
Plan agent asked for clarification on star display (CSS or low-intensity fireworks) and performance considerations. It also asked about toggle placement. Reddington didn't remember requesting a toggle button.
After reviewing the plan, he realized the Plan agent had caught an earlier requirement: an animation toggle for accessibility. This is rubber ducking with AI that has context and can check whether requirements still make sense.
Using TDD again, one test failed initially — JSDOM environment setup was missing. Copilot spotted the failure, identified the missing JSDOM setup in the test configuration, and made the fix. All tests went green.
The app now had fireworks at different intensity levels, an animated starfield using CSS, a city skyline, reduced motion support, and a query parameter override.
What This Changes For Developers
This workflow isn't about replacing developers. It's about structuring how you work with AI tools to get better results.
Context window management means treating chat sessions like functions. They have inputs (requirements, files, context) and outputs (code, plans, tests). When the function is done, you don't drag its local variables into the next function. You start fresh.
Plan agent changes requirement gathering. Instead of writing a spec alone, you collaborate with an AI that asks questions you forgot. It's not perfect — sometimes the answer to A or B is "somewhere in the middle" — but it surfaces edge cases early.
Custom agents bring specialization. Your UI Performance Specialist has different expertise than your Security Reviewer. Just like you wouldn't ask a frontend developer to design your database schema, you shouldn't ask a general-purpose AI to optimize animation frame budgets.
TDD with Copilot works because tests catch what AI gets wrong. The year rollover bug. The JSDOM configuration. These weren't hypothetical failures — they were real bugs caught by real tests before any user saw them.
The workflow also embraces iteration. Reddington built a basic countdown, added timezones, implemented fireworks, then created a separate contribution graph theme. He later unified both into an open source countdown app called Timestamp with a centralized theme orchestrator.
Rome wasn't built in a day. You don't need everything on day one.
Try It Yourself
If you want to experiment with these patterns, start with context window management. Next time you're working with GitHub Copilot (or any AI coding tool), try this:
- Start a chat session for initial setup and requirements
- Once you have working code, start a NEW chat session for the next feature
- Reference custom instruction files or saved plans instead of dragging old conversation history
For Plan agent workflows in VS Code:
- Use @plan in Copilot Chat when requirements are vague
- Answer its clarifying questions honestly (including "I don't know yet")
- Save the plan output to a Markdown file as your source of truth
- Start a new chat session for implementation, referencing the plan file
For TDD with Copilot:
- Ask Copilot to write tests first based on expected behavior
- Run the tests and confirm they fail (red state)
- Ask Copilot to implement code to make tests pass
- Let Copilot access terminal output to see test failures and iterate
You can also explore custom agents. The awesome-copilot repository has examples for different specializations. Or check out how GitHub Copilot CLI brings agentic workflows to your terminal for command-line focused development.
Reddington's live countdown app and source code are available. Fork it, star it, contribute a new theme.
The Bottom Line
Use this workflow if you're building with AI coding tools and frustrated by cluttered conversations, vague requirements, or bugs that slip through. Skip it if you're doing quick prototypes where quality doesn't matter or you're already happy with your AI workflow.
The real risk is treating AI chat like a continuous conversation instead of a series of focused sessions. You'll get worse outputs, drag irrelevant context into new problems, and miss edge cases that Plan agent would have caught.
The real opportunity is combining these patterns: context management + Plan agent + custom agents + TDD. Each technique amplifies the others. Fresh context makes Plan agent questions sharper. Custom agents give better implementation details. TDD catches what AI gets wrong.
This isn't theoretical. Reddington built a working app live on stream, bugs and all. The world map rendered as abstract art. Tests caught year rollover bugs. Requirements evolved based on viewer input. That's real development, not a polished demo.
What will you build with these patterns? The techniques work whether you're building a countdown app or something entirely different. The workflow is the point, not the project.
Source: GitHub Blog