WARP

Claude Code's Team Development Era Begins — A Beginner's Deep Dive into /simplify and /batch in v2.1.63

2026-03-02 Ryuta Hamamoto (濱本 隆太)

A beginner-friendly breakdown of Claude Code v2.1.63's new /simplify and /batch commands. Covers how multi-agent team development works, git worktree usage, and practical implementation — explained from the ground up.


This is Ryuta Hamamoto from TIMEWELL. Today I want to introduce something on the technical side.

Now that AI writing code has become the norm, "AI as a tool for individual productivity" is hardly novel anymore. But how to actually harness AI for large-scale projects? Most developers are still figuring that out.

Against that backdrop, the AI coding tool Claude Code has taken an interesting turn — evolving from a tool where you direct a single AI assistant to one where you orchestrate multiple AI agents working as a team.

The headline features of version 2.1.63 [^1], released February 28, 2026, are two new commands: /simplify and /batch. I see this as more than a useful feature addition — it's an inflection point where the development workflow itself shifts from "solo" to "team."

I'll explain everything from the basics, in a way that's accessible even if you've never touched Claude Code before.

What Is Claude Code?

Claude Code is an AI assistant that works interactively in your terminal — the black screen that engineers use every day — to help you develop [^2].

What sets it apart from a typical chatbot is that Claude Code operates as an "agent" — actively doing things rather than just responding. It doesn't just answer questions and show you code snippets; it actually edits files, runs commands, and handles Git operations itself [^3].

For example, if you say "Open main.py and add a function called hello," it will actually rewrite the file. Say "Run the tests and show me the results," and it'll execute the test command and display the output. "Check out a new branch and commit these changes" — also possible.

Traditional development meant a cycle of "find which file has the information," "figure out how to fix it," and "type the command." Claude Code changes this. Developers can focus on communicating "what they want to accomplish," freed from routine tasks to concentrate on the creative work of "what to build."

This "agent" concept is Claude Code's defining characteristic — and with this update, these agents have started coordinating as a team rather than working alone. That's the core of it.

The "Agent Team" Concept That Upends Development Norms

Let me start with a striking number. Claude Code's lead developer, Boris Cherny, shared data publicly on X in December 2025.

"In the last 30 days, I've landed 259 PRs. 497 commits, 40,000 lines added, 38,000 deleted. Every single line was written by Claude Code and Opus 4.5." [^4]

259 PRs in 30 days. That works out to roughly 8.6 completed features or fixes per day — a figure even seasoned engineers would struggle to hit, produced entirely by AI-written code.

What's interesting is that Boris's setup is, in his words, "surprisingly vanilla." He runs five instances in the terminal and five to ten sessions in the browser simultaneously, with system notifications for sessions needing input. No special hacks, just a disciplined workflow applied consistently. He had active sessions on 46 of 47 days, with the longest session running around 18 hours. At this scale, the boundary between human and AI starts to blur.

In February 2026, Boris appeared on the podcast "Lenny's Podcast" and said something even more pointed:

"At this point, coding is basically a solved problem." [^5]

It's a statement that's easy to misread — he's not saying humans no longer need to write code. What he means is that the act of "writing" code itself is now largely automatable by AI, and so the developer's role is shifting toward something closer to an architect or product manager: deciding what to build, directing AI, and supervising the whole.

In practice, Anthropic reports a 200% increase in per-engineer productivity internally, and that 4% of all public GitHub commits are generated by Claude Code [^5] [^6].

What powers this productivity isn't relying on one super-agent but coordinating multiple agents: an "agent team" where complex tasks are broken into smaller subtasks and processed in parallel by multiple AIs, vastly faster than sequential work.

The technical foundation that made this possible was native git worktree support [^7].

The "git worktree" Revolution

Let me explain this for those unfamiliar with it.

Normally, when you want to look at a different Git branch, you have to commit or stash your current work, switch branches, and switch back when you're done. A minor but real annoyance.

git worktree solves this: from a single repository, you can create multiple working directories, each linked to a different branch, all at the same time [^8].

A helpful analogy: imagine a large kitchen with a dedicated Chinese cooking station, an Italian station, and a Japanese station — all set up simultaneously. Each station is independent; stir-frying next to boiling pasta causes no interference. The refrigerator and pantry, however, are shared.

In development terms: you can work on the feature-auth branch in worktree-A while simultaneously working on bugfix-123 in worktree-B. Because the directories are physically separate, no branch switching is required.
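That two-worktrees-at-once setup can be reproduced with plain git commands. The snippet below builds a throwaway repository so it runs anywhere; the branch and directory names match the example above:

```shell
# Build a throwaway repo so the demo is self-contained
set -e
tmp=$(mktemp -d)
cd "$tmp" && git init -q repo && cd repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Two branches, each checked out in its own directory at the same time
git branch feature-auth
git branch bugfix-123
git worktree add ../worktree-A feature-auth
git worktree add ../worktree-B bugfix-123

git worktree list   # main repo + two worktrees, no branch switching involved
```

Edits made in ../worktree-A land on feature-auth and edits in ../worktree-B land on bugfix-123, while both share the same object store: the "refrigerator" in the kitchen analogy.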

Claude Code uses this mechanism internally, giving each AI agent its own independent working environment. Parallel work without conflicts. The /batch command is built on exactly this foundation.

Usage is simple — just launch with the --worktree flag:

# Create a worktree named "feature-auth" and launch inside it
claude --worktree feature-auth

A directory is created at .claude/worktrees/feature-auth in the project root, and Claude Code starts an independent session there. You can hand new feature development or experimental changes to AI without touching your main working directory [^7] [^8].

This "separation of workspaces" was the technical foundation that made /simplify and /batch possible.


/simplify — An "AI Review Team" That Automates Pre-PR Quality Checks

Code review is an indispensable step for maintaining quality, but honestly, it's rarely a pleasure. Checking style guide minutiae, looking for more efficient approaches, catching typos — most developers know the experience of review time being consumed by these non-essential concerns.

/simplify takes this problem on directly [^9].

What Does /simplify Do?

It launches multiple AI agents simultaneously against your code and runs checks in parallel from three distinct perspectives [^10]:

| Review Perspective | What It Checks |
| --- | --- |
| Code Quality | Readability and maintainability: variable naming, function complexity, etc. |
| Code Efficiency | Performance issues: redundant loops, inappropriate data structures, etc. |
| CLAUDE.md Compliance | Adherence to project-specific coding conventions |

Think of it as three reviewers — a quality specialist, a performance specialist, and a standards specialist — all looking at your code simultaneously.

Practical Usage

Usage is almost anticlimactically simple. Just append "then run /simplify" to your code change instruction:

> hey claude, add timeout handling to this API client. then run /simplify

Claude Code first implements the timeout handling, then automatically runs /simplify upon completion, beginning the three-perspective review. If improvements are found, the AI corrects the code itself, delivering a more polished result.

Personally, I recommend using it as the "last line of defense" before submitting a PR. Running an AI self-review first eliminates obvious issues upfront, freeing human reviewers to focus on the more intellectually demanding discussion of design and logic validity.

A Closer Look at the Three Perspectives

Code quality means "code that future you, or another team member, can immediately understand and safely modify." /simplify might suggest renaming vague variables like data or temp to user_list or customer_name, propose splitting a massive hundred-line function into smaller pieces, or recommend turning a magic number like if (status == 2) into a constant: const STATUS_APPROVED = 2;.

Code efficiency means whether the code uses CPU and memory wisely. It finds unnecessary work — like repeating the same calculation inside a loop — and moves it outside, or suggests replacing a list with a hash map for searching large datasets.

CLAUDE.md compliance needs a bit of explanation. CLAUDE.md is a config file you place at your project root. Write your team's specific rules there, and Claude Code reads and follows them [^11].

# CLAUDE.md

## Coding Standards

- All public functions must include documentation comments.
- Line length must not exceed 80 characters.
- When adding an API endpoint, always update the OpenAPI spec.

With this in place, when /simplify runs, the AI will automatically flag "missing documentation comment," "this line exceeds 80 characters," and "OpenAPI spec hasn't been updated" — and fix them. Write down your team's implicit knowledge, and the AI becomes the rules enforcer.

How It Differs from the Old Plugin

A similar feature was available before via /plugin install code-simplifier. When Boris was asked on Threads whether /simplify was the same as code-simplifier, he replied: "No, it's a bit more advanced." [^10]

The old plugin was a single-agent architecture focused on "improving code clarity, consistency, and maintainability." The new /simplify is a "team" architecture — multiple agents with distinct specializations (quality, efficiency, conventions) running in parallel. That structural shift from single to team is the substance of "a bit more advanced."

/batch — An "AI Migration Team" That Automates Large-Scale Code Changes

If /simplify is a "review team" that polishes individual code changes, /batch is a "migration team" that executes large-scale changes spanning an entire project in an organized way.

Migrating from an old library to a new one. Adding TypeScript type definitions to every file. Company-wide code changes following a standards update. Each individual change might be simple, but when hundreds or thousands of files are involved, the time investment is enormous — and manual work introduces mistakes. For development teams, these kinds of tasks have long been a persistent headache.

/batch automates "apply the same pattern of changes to a large number of files" by mobilizing dozens of AI agents to work in parallel [^10].

What's Happening Inside /batch

When executed, it proceeds through four autonomous phases [^12]:

| Phase | What Happens |
| --- | --- |
| 1. Planning | The main agent interacts with you to develop a migration plan: target directories, changes to apply, files to exclude, etc. |
| 2. Parallel Execution | Tasks are broken down based on the plan, and dozens of subagents are launched, each assigned a set of files to handle. |
| 3. Complete Isolation | Each subagent works in its own independent git worktree environment, so agents don't interfere with each other. |
| 4. Testing and PR | Each subagent autonomously runs tests and, on success, creates a PR automatically. On failure, it attempts fixes or asks for human help. |

All of this happens automatically — and in parallel. Once you've approved the initial plan, you simply review and merge the PRs as they come in. A migration that might have taken weeks could potentially finish in hours.
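To make the isolation model concrete, here is a rough manual equivalent: one worktree per task, one headless session per worktree. This is an illustrative sketch of the idea, not how /batch is implemented internally; it assumes the claude CLI's -p (non-interactive print) mode, and the loop is guarded so its structure can be run and inspected even without the CLI installed:

```shell
set -e
# Throwaway repo so the sketch is self-contained
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

mkdir -p .claude/worktrees
for task in feature-auth bugfix-123; do
  # Isolation in miniature: a physically separate checkout per task
  git worktree add -b "batch/$task" ".claude/worktrees/$task"
  if command -v claude >/dev/null 2>&1; then
    # Parallel execution: one headless session per worktree, backgrounded
    (cd ".claude/worktrees/$task" && claude -p "handle $task") &
  else
    echo "would run: claude -p 'handle $task' in .claude/worktrees/$task"
  fi
done
wait   # collect all background sessions before reviewing results
git worktree list
```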

A Concrete Use Case

The official example Boris provided is migrating a frontend framework from Solid to React [^10]:

> /batch migrate src/ from Solid to React

That single command triggers an AI team to rewrite all Solid syntax to React across every file under src/.

Another application: enabling TypeScript strict mode. Turning on "strict": true in tsconfig.json typically produces a flood of type errors — but you can use /batch to assign files to subagents for parallel fixes:

> /batch add TypeScript strict types to src/

It's equally applicable for bulk lint error fixes after introducing new rules, or parallel test code generation.

Handling Conflicts

Parallel work naturally raises the question of conflicts. But because /batch is built on physical file-space isolation via git worktree, most conflicts are structurally prevented — each subagent's file set is fundamentally independent.

The exception is when multiple subagents try to edit the same shared file — config files, common utilities, and the like. In those cases, the main agent or a human needs to resolve the conflict at the PR merge stage.

In other words, /batch is most powerful for tasks that match the pattern: "apply the same change to many files with minimal inter-file dependencies."
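One cheap way to spot the shared-file case before merging is to list files touched by more than one batch branch. This is a generic git sketch, not a /batch feature; it builds a throwaway repo where two branches both edit a shared config file:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Two batch branches that each add their own file but both touch shared-config.txt
for task in a b; do
  git checkout -q -b "batch/$task" main
  echo "$task" > "file-$task.txt"
  echo "$task" >> shared-config.txt
  git add .
  git -c user.email=demo@example.com -c user.name=demo commit -q -m "task $task"
done
git checkout -q main

# Files changed on more than one branch are merge-conflict candidates
{ git diff --name-only main..batch/a
  git diff --name-only main..batch/b; } | sort | uniq -d
# → shared-config.txt
```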

Building Your Own Batch-Style Workflow

The principles behind /batch can be applied manually. Simply add isolation: worktree to the frontmatter of a custom agent definition file, and that agent will run in an isolated git worktree environment [^7]:

# .claude/agents/test-runner.md
---
name: test-runner
model: haiku
isolation: worktree
---

You are an expert test engineer.
Write and execute test code for the specified files.

With this configuration, you can design custom batch workflows — like "add test code in parallel for every service file in the project." This flexibility is what makes Claude Code feel less like a tool and more like a platform that extends a developer's creative capabilities.

Works Without Git Too

In many Japanese enterprise environments, Subversion or Perforce are still in active use. Worktree isolation may look git-specific, but users of Mercurial, Perforce, and SVN can also benefit. By defining WorktreeCreate and WorktreeRemove hooks, you can build isolated environments without git.
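The release notes name the hooks but, as far as I can tell, don't pin down a script contract, so the shapes below are my assumptions: a WorktreeCreate hook that produces a fresh isolated checkout (via a hypothetical SVN_REPO_URL variable) and a WorktreeRemove hook that tears it down. Treat this as a sketch of the concept, not the documented interface:

```shell
# Hypothetical WorktreeCreate hook body: create an isolated SVN checkout.
# SVN_REPO_URL and the "$1 = target directory" convention are assumptions.
worktree_create() {
  if command -v svn >/dev/null 2>&1 && [ -n "${SVN_REPO_URL:-}" ]; then
    svn checkout -q "$SVN_REPO_URL" "$1"
  else
    # Fallback stub so the flow can be exercised without an SVN server
    mkdir -p "$1" && echo "stub checkout into $1"
  fi
}

# Hypothetical WorktreeRemove hook body: discard the isolated checkout.
worktree_remove() {
  rm -rf "$1"
}

ws=$(mktemp -d)/agent-ws-1
worktree_create "$ws"
worktree_remove "$ws"
```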

The fact that teams on legacy version control systems don't have to give up /batch's parallel processing is a quietly significant detail — it meaningfully lowers the enterprise adoption barrier.

Pitfalls and Gotchas to Know Upfront

Powerful features come with their own set of traps. All of these are avoidable with a little foreknowledge, so here's a consolidated summary.

Context Window Consumption

The source of Claude Code's intelligence is "context" — the amount of information the AI can load at once. User instructions, current code, related files, tool descriptions. All of it must fit within the context window.

The problem with external tool integrations: when you enable MCP servers (which connect Claude Code to external APIs and services), the text describing how to use those tools takes up context. One developer reported that simply enabling 13 MCP servers consumed 82,000 tokens — roughly 41% of the context window — before a single instruction was sent [^13].

In that state, launching dozens of subagents via /batch leaves almost no room for the AI's actual reasoning, leading to performance degradation and unexpected errors.

Two mitigations: first, temporarily disable MCP servers you don't need for the /batch task. Second, lean on Anthropic's "Tool Search" feature [^14]: when tool definitions exceed 10% of the context, the system automatically switches from loading every tool description upfront to having the AI search for the tools it needs. Still, the more servers you enable, the more overhead you carry; conscious trimming is worthwhile.

Cleaning Up git worktrees

Worktrees created by /batch or claude --worktree are cleaned up when the session ends [^8]. The exact behavior is worth knowing:

If there are no changes, the worktree and branch are automatically deleted. If there are changes or commits, you'll be asked whether to keep or delete them. Choosing "delete" discards any uncommitted changes, so be careful.

Since worktrees are created under .claude/worktrees/ in the project root, add this to your .gitignore or they'll show up as untracked files in git status:

.claude/worktrees/
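For manual housekeeping, git's own worktree subcommands cover inspection and cleanup. The snippet builds a throwaway repo so it can run anywhere:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
echo '.claude/worktrees/' >> .gitignore   # keep generated worktrees out of git status

git worktree add -b demo .claude/worktrees/demo
git worktree list                            # see what's checked out, and where
git worktree remove .claude/worktrees/demo   # refuses if the worktree has changes
git worktree prune                           # clear metadata for dirs deleted by hand
```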

Session History Management

Deleting a worktree also deletes the Claude Code session history from inside it. If you might want to revisit the conversation — especially for complex changes or sessions that contain important decision-making context — keep the worktree or record the key points separately.

An Early Bug and Workaround (Now Fixed)

As a side note: right after worktree support launched, a bug was reported where Claude Code would create worktrees automatically even without the --worktree flag [^7]. Boris fixed it quickly and shared a temporary workaround — adding EnterWorktree to the deny list in /permissions. It's been fixed, but knowing that /permissions lets you configure permissions when a tool behaves unexpectedly is worth filing away.

Other Changes in v2.1.63

There are improvements in v2.1.63 that get overshadowed by /simplify and /batch but meaningfully improve the development experience [^1].

HTTP hooks have been added, enabling direct HTTP POST to URLs without going through shell scripts. CI/CD pipeline triggers and Slack notification integrations no longer require scripts.

Project configs and auto memory are now shared across git worktrees — the underlying technology of /batch itself. CLAUDE.md settings carry over between worktrees, so each subagent works with a correct understanding of project conventions from the start.

The ENABLE_CLAUDEAI_MCP_SERVERS=false environment variable has been added, allowing opt-out of MCP servers loaded through claude.ai. Given the context window consumption issue discussed above, this is a practical addition for explicitly disabling servers you don't need.

The /model command is improved — the currently active model name now appears in the slash command menu, helping prevent accidentally working with an unintended model.

Long-session stability has also improved, with multiple memory leak fixes in core areas: git root directory detection, JSON parsing, and settings menus. A welcome improvement for heavy users who keep Claude Code running all day.

A bug with the /clear command has been fixed — clearing conversation history no longer leaves stale cached Skills behind. You can start fresh tasks from a genuinely clean state.

Summary: Your Development Is Now a Team Sport

The change that /simplify and /batch bring isn't merely automating tasks. The relationship between developer and AI shifts from a one-to-one "director and executor" dynamic to a one-to-many "manager and specialist team" dynamic. The workflow shifts from "give an instruction and wait" to "approve the plan and let the team run."

Honestly, I don't think AI will replace every task. But having a team of AI agents handle pre-review quality checks and large-scale routine work means we humans can concentrate on the work that genuinely needs our thinking. That's significant.

Five things you can do starting today:

  1. Run claude update to upgrade to v2.1.63
  2. Try /simplify on your next PR — just append "then run /simplify" to your code change instruction
  3. Get a feel for /batch with a small bulk change — lint fixes or comment updates work fine, no need to start with a framework migration
  4. Document your team's conventions in CLAUDE.md — since /simplify checks CLAUDE.md compliance, writing them down doubles the value
  5. Build a custom agent with isolation: worktree — the first step toward designing a batch workflow tailored to your own project

The era of a single developer leading an AI agent team to build products has already begun.


At TIMEWELL, the WARP program teaches you how to put the latest AI development tools like Claude Code to work in practice. If you want to master agent-style tools and lift your whole team's development productivity, reach out for a conversation — we'd love to help.

References

[^1]: Claude Code v2.1.63 Release Notes. (2026, February 28). GitHub. https://github.com/anthropics/claude-code/releases/tag/v2.1.63

[^2]: Claude Code Docs. Claude Code. https://code.claude.com/docs/en

[^3]: Common workflows. Claude Code Docs. https://code.claude.com/docs/en/common-workflows

[^4]: Cherny, B. (2025, December). In the last 30 days, I've landed 259 PRs... X. https://x.com/bcherny/

[^5]: Cherny, B. (2026, February). The head of Claude Code on how AI is changing software development. Lenny's Podcast. https://www.lennysnewsletter.com/p/head-of-claude-code-on-how-ai-is-changing-software-development-boris-cherny

[^6]: Waydev. (2026, February). 8 Game-Changing Insights from Anthropic's Head of Claude Code, Boris Cherny. https://waydev.co/8-game-changing-insights-from-anthropic-claudecode-boris-cherny/

[^7]: Cherny, B. (2026, February 21). Introducing built-in git worktree support for Claude Code... Threads. https://www.threads.net/@boris_cherny/post/DVAAnexgRUj/

[^8]: Run parallel Claude Code sessions with Git worktrees. Claude Code Docs. https://code.claude.com/docs/en/common-workflows#run-parallel-claude-code-sessions-with-git-worktrees

[^9]: Njenga, J. (2026, February). How I'm Using Claude Code (New) /simplify and /batch (To x10 My Code Reviews). Medium. https://medium.com/@joe.njenga/how-im-using-claude-code-new-simplify-batch-to-x10-my-code-reviews-888780a6a42a

[^10]: Cherny, B. (2026, February 28). In the next version of Claude Code, we're introducing two new skills... Threads. https://www.threads.net/@boris_cherny/post/DVR-HzBkqRd/

[^11]: Store instructions and memories. Claude Code Docs. https://code.claude.com/docs/en/store-instructions-and-memories

[^12]: Dramatic_Squash_3502. (2026, February). What's new in CC 2.1.63 system prompts (+4,200 tokens). Reddit. https://www.reddit.com/r/ClaudeAI/comments/1rh7lk8/whats_new_in_cc_2163_system_prompts_4200_tokens/

[^13]: Reddit community discussion on MCP context window consumption. (2026, February). Reddit r/ClaudeAI.

[^14]: Extend Claude Code - Tool Search. Claude Code Docs. https://code.claude.com/docs/en/extend-claude-code
