
How Claude Code Creator Boris Cherny Actually Uses Claude Code: 40 Productivity Secrets Revealed


Boris Cherny created Claude Code and now leads it as Head of Claude Code at Anthropic. Between January and February 2026, he opened the hood and showed exactly how he uses it every day, publishing 40 tips across a four-part series.

The developer community's reaction? "Developers are losing their minds."

This post distills every major insight from Boris's public tips, Anthropic webinars, the InfoQ interview, Lenny's Newsletter feature, and the official How Anthropic Teams Use Claude Code blog post. If you use Claude Code — or plan to — this is the closest thing to a masterclass from the person who built it.


Who Is Boris Cherny?

Boris Cherny is a Member of Technical Staff at Anthropic Labs and the creator of Claude Code. He currently serves as Head of Claude Code, making him the person most responsible for Claude Code's direction, design philosophy, and internal adoption at Anthropic.

One detail stands out. Boris briefly left Anthropic to join Cursor (Anysphere) in a senior role — and returned to Anthropic within just two weeks. The Information confirmed this. It is a vivid signal of how strongly he believes in Claude Code's trajectory.

His philosophy on Claude Code's design is intentional:

"Claude Code is designed to be low-level and unopinionated so that users can use it, customize it, and hack on it in whatever way works for them."

That means there is no single correct way to use Claude Code. Every member of the Claude Code team uses it differently. Boris's tips are not prescriptions — they are a window into one expert's workflow.


The 3 Core Principles Behind Boris's Workflow

Before diving into tactics, three principles shape everything Boris does:

1. Parallelization is the single biggest productivity lever. Running multiple Claude sessions simultaneously — up to 15 at once — is the one change Boris says delivers the most impact.

2. Slow is fast. Using the largest, most capable model (Opus 4.6) and spending time in Plan mode before implementation reduces total time by eliminating mid-task corrections.

3. Verification loops determine quality. Giving Claude a way to verify its own work is Boris's single most important tip; he credits it with a 2-3x improvement in output quality.


Part 1: Parallel Sessions — The Highest-Impact Workflow Change

How Boris Runs Up to 15 Claude Sessions Simultaneously

Boris does not run one Claude session. He runs many.

His setup:

  • 5 terminal sessions — each on a separate git checkout of the same repository
  • 5-10 web sessions — running at claude.ai/code
  • Total: up to 15 concurrent Claude sessions

Each terminal tab is numbered 1 through 5. He configures iTerm2 notifications so that when Claude is waiting for input, he gets an immediate alert and can switch to that tab instantly.
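A self-contained sketch of that layout, using git worktrees as the per-tab checkouts. Everything here (the /tmp path, repo name, and branch names) is illustrative:

```shell
# Build a scratch repo so the sketch runs anywhere;
# in practice, start from your real repository.
mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q -b main myrepo && cd myrepo
git config user.email "demo@example.com"   # local identity for the scratch repo
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"

# One checkout per terminal tab, numbered like Boris's tabs.
for i in 1 2 3 4 5; do
  git worktree add "../session-$i" -b "session-$i"
done

git worktree list
# Each tab then runs its own session, e.g.:
#   cd ../session-1 && claude
```

Each worktree shares the same repository history but has its own working directory and branch, which is exactly what keeps parallel sessions from stepping on each other.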

The web sessions complement the terminal ones: the & command and the --teleport flag move a session between the local terminal and the web, so he can start a session on iOS in the morning and pick it up on desktop later.

Why Parallelization Works

While one Claude session is running a long task, Boris is already working in another session on something else. When the first session completes and requests input, he reviews it, gives direction, and jumps back to the next task.

The mental model is a pit crew, not a solo mechanic. Every Claude session is a worker. Your job becomes coordination, review, and steering — not waiting.

claude --worktree: Native Parallel Isolation (Launched February 2026)

Part 4 of Boris's tips, published February 20, 2026, introduced a major new capability: native git worktree support in Claude Code.

Available in Claude Code v2.1.49 (released February 19, 2026), the --worktree flag runs each Claude session in an isolated git worktree automatically. No code conflicts between parallel sessions. No stepping on each other's changes.

# Run Claude in an isolated worktree
claude --worktree

# Name the worktree explicitly
claude --worktree feature-auth

# Combine with tmux
claude --worktree --tmux

Shell aliases (za, zb, zc) enable fast switching between worktrees, and applying the isolation: worktree setting to subagents extends the same isolation to parallel batch work and large-scale code migrations.
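The aliases themselves can be as simple as directory jumps. The paths below are placeholders for wherever your worktrees actually live:

```shell
# Hypothetical worktree locations; adjust to your own layout.
alias za='cd ~/code/myrepo-worktrees/a'
alias zb='cd ~/code/myrepo-worktrees/b'
alias zc='cd ~/code/myrepo-worktrees/c'
```

Put them in your shell rc file so every new terminal tab picks them up.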

Real-world impact: Submit a PR in one session. Debug a bug in a second. Start a code review in a third. All simultaneously, with zero conflicts.


Part 2: Model Selection and Plan Mode — Why Slow Is Fast

Boris Uses Opus 4.6 for Everything

Boris started with Opus 4.5 with Thinking mode for all coding tasks. When Opus 4.6 launched, he upgraded immediately. His assessment:

"I've been using Opus 4.6 for a bit — it is our best model yet. It is more agentic, more intelligent, runs for longer, and is more careful and exhaustive." — Boris Cherny, X (official account)

Opus 4.6 is slower per response than Sonnet. Boris knows this and uses it anyway. His reasoning:

"The bigger model requires less steering and is better at tool use, which means it's almost always faster overall." — Boris Cherny, InfoQ interview

Fewer mid-task course corrections mean lower total time to completion. He selects High effort for every task via the /model command.

The counterintuitive truth: The model that takes longer per token often takes less total wall-clock time because it gets things right without requiring human re-intervention.

Plan Mode: Always Start Here for Complex Work

For any complex task — especially writing a PR — Boris activates Plan mode first.

How to activate: Press Shift+Tab twice.

The workflow:

  1. Enter Plan mode
  2. Iterate with Claude on the plan until it is exactly right
  3. Switch to auto-accept edits mode
  4. Claude completes the implementation — typically in one shot

"A good plan is really important!" — Boris Cherny

When something goes wrong during implementation, Boris loops straight back to Plan mode. The invest-in-the-plan strategy eliminates most rework. One solid plan equals one-shot implementation.


Part 3: CLAUDE.md — Building a Compounding Knowledge Engine

What CLAUDE.md Is

CLAUDE.md is a Markdown file that Claude Code loads automatically at the start of every session. Boris and the Anthropic team treat it as their team's compounding knowledge engine.

How Boris Uses CLAUDE.md

  • One file, checked into git — shared automatically across the entire team
  • Updated every time Claude makes a mistake — the error gets documented immediately
  • The same mistake never happens twice

The prompt Boris uses after any correction:

Update CLAUDE.md so you don't make this mistake again.

Claude is good at writing rules for itself. Over time, CLAUDE.md becomes a self-reinforcing system that makes Claude progressively smarter about your specific codebase and team conventions.

PR Review Integration

When reviewing PRs, using @.claude tags triggers automatic additions to CLAUDE.md. A GitHub Action automates this entirely — what Boris calls compound engineering.

"Every correction becomes permanent context."

What to Put in CLAUDE.md

  • Code style conventions
  • Architecture decisions
  • Preferred libraries and frameworks
  • PR checklists
  • Recurring mistake prevention rules
  • Design guidelines specific to your project

Edit CLAUDE.md ruthlessly over time. Keep only what matters. The file should be dense with high-signal information, not bloated with rarely-relevant notes.
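To make that concrete, here is what a dense CLAUDE.md might look like. The conventions are invented for illustration, not taken from Boris's actual file:

```shell
# Write an illustrative CLAUDE.md (contents are hypothetical).
mkdir -p /tmp/claudemd-demo && cd /tmp/claudemd-demo
cat > CLAUDE.md <<'EOF'
# Project conventions

## Code style
- Use named exports; no default exports.

## Architecture
- All database access goes through src/db/client.ts; never query directly.

## Mistakes to avoid
- Never edit generated files under src/gen/; regenerate them instead.
EOF
```

Every line earns its place: a convention, a decision, or a mistake that should never recur.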


Part 4: Slash Commands, Subagents, and Hooks — The Automation Triad

Slash Commands: 1-Shot Automation for Repetitive Workflows

Every inner-loop workflow that Boris performs multiple times per day becomes a slash command.

Storage location: .claude/commands/ directory, checked into git, automatically shared with the entire team.

Boris's most-used slash commands:

  • /commit-push-pr — pre-calculates git state, then commits, pushes, and creates a PR (dozens of times per day)
  • /techdebt — detects and removes duplicate code (periodic)
  • /review-pr — automates PR code review (every PR)

The key insight: slash commands eliminate the cognitive overhead of remembering multi-step processes. Boris defines the workflow once and runs it repeatedly without thinking.
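Defining one is lightweight: a slash command is a Markdown prompt file whose filename becomes the command name. The prompt body below is a hypothetical version of /techdebt, not Boris's actual prompt:

```shell
mkdir -p /tmp/cmd-demo/.claude/commands && cd /tmp/cmd-demo
# The file name (techdebt.md) becomes the command name (/techdebt).
cat > .claude/commands/techdebt.md <<'EOF'
Scan the codebase for duplicated logic. For each duplicate group,
propose a single shared implementation, then apply the refactor
and confirm the test suite still passes.
EOF
```

Because the file lives in the repo, the whole team gets the command on their next git pull.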

Subagents: Specialized Parallel Processing

Adding "use subagents" to any prompt expands Claude's parallel processing capability. Subagents keep the main context clean and run individual tasks in isolation.

Subagents the Anthropic team runs in production:

  • code-simplifier — cleans and refactors code
  • verify-app — generates end-to-end test instructions
  • build-validator — automates build validation

Custom subagents live in .claude/agents/. Each gets its own Markdown file with configurable name, color, tool set, permissions, and model.
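A minimal subagent definition might look like the sketch below. The frontmatter fields shown (name, description, tools, model) follow the documented pattern, but the values and system prompt are invented:

```shell
mkdir -p /tmp/agent-demo/.claude/agents && cd /tmp/agent-demo
cat > .claude/agents/code-simplifier.md <<'EOF'
---
name: code-simplifier
description: Cleans up and refactors code after a feature lands.
tools: Read, Edit, Bash
model: sonnet
---
You simplify code without changing behavior: remove dead code,
flatten needless indirection, and keep public APIs stable.
EOF
```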

Hooks: The Automation Backstop

PostToolUse hooks run automatically after every code edit. Boris uses them to trigger auto-formatting — a backstop that prevents CI failures.

Ninety percent of the time, the auto-formatting changes nothing. In the remaining edge cases, it catches the failures that would otherwise break CI and cost time.
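A minimal hook of that shape, sketched in settings.json. The PostToolUse/matcher/command structure follows the documented hook schema; the formatter command itself is a placeholder:

```shell
mkdir -p /tmp/hook-demo/.claude && cd /tmp/hook-demo
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
EOF
```

The matcher restricts the hook to file-editing tools, so unrelated tool calls (searches, bash commands) do not trigger the formatter.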

Advanced hook applications:

  • Route permission requests to Slack or to the latest Opus model for approval
  • Run continuation logic at the end of each Claude turn
  • Automate logging before and after every tool call

Permissions: Safe Automation Without Skipping Guards

Use /permissions to pre-approve safe commands. This is the alternative to --dangerously-skip-permissions.

Wildcard syntax enables precise control:

Bash(bun run *)
Edit(/docs/**)

Store permissions in .claude/settings.json and check it into git. The entire team gets the same permission configuration automatically.
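A minimal settings.json carrying just the permission rules from the examples above (everything else the file can hold is omitted for brevity):

```shell
mkdir -p /tmp/perm-demo/.claude && cd /tmp/perm-demo
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Bash(bun run *)",
      "Edit(/docs/**)"
    ]
  }
}
EOF
```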


Part 5: Verification Feedback Loops and MCP Integration

The Single Most Important Tip

Of all 40 tips, Boris identifies this one as the most important:

"Giving Claude a way to verify its work improves output quality 2-3x." — Boris Cherny

Verification methods by domain:

  • Backend — run bash commands, execute test suites
  • Frontend — Chrome extension for browser navigation, automated UI iteration
  • Mobile — iOS and Android simulators
  • Distributed systems — Docker log analysis

When Claude completes a task and has a verification method, it checks its own work. If it finds an error, it fixes and re-verifies — without human involvement. This autonomous feedback loop is the core mechanism that enables consistently high-quality output at scale.
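One cheap way to hand Claude a verification method is a single entry-point script it can run after every change. The checks below are placeholders; substitute your real lint, test, and build commands:

```shell
mkdir -p /tmp/verify-demo && cd /tmp/verify-demo
cat > verify.sh <<'EOF'
#!/bin/sh
set -e                     # stop at the first failing check
echo "lint..."  && true    # placeholder, e.g. npx eslint .
echo "tests..." && true    # placeholder, e.g. bun test
echo "build..." && true    # placeholder, e.g. bun run build
echo "ALL CHECKS PASSED"
EOF
chmod +x verify.sh
./verify.sh
```

The instruction to Claude is then simply: run ./verify.sh after each change and fix anything that fails.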

MCP Integration: Zero Context Switching

Claude Code connects directly to external tools through MCP (Model Context Protocol). Boris's team uses it to eliminate context switching entirely.

Slack MCP: Feed a bug thread directly to Claude. One command — "Go fix the failing CI tests" — handles everything without micromanagement.

BigQuery MCP: Analyze metrics using the BigQuery CLI (bq). Boris has not written SQL directly in over six months.

Sentry MCP: Automatically analyze error context and identify root causes.

All MCP server configurations are checked into git. Every team member gets the same tool environment automatically.
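A project-level MCP config is just a JSON file at the repo root. The server name and launch command below are placeholders; only the top-level mcpServers key is taken from the documented project config format:

```shell
mkdir -p /tmp/mcp-demo && cd /tmp/mcp-demo
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    }
  }
}
EOF
```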

Claude Code's official documentation covers out-of-the-box integrations with Google Drive, Jira, Slack, GitHub, and other major tools.

Additional High-Leverage Tactics

Data analysis: "Analyze last quarter's conversion rate trends" → BigQuery MCP executes immediately.

Learning mode: Enable 'Explanatory' or 'Learning' style in /config. Claude explains every code change as it works, functioning as a coding coach.

Visualization: For unfamiliar code, ask Claude to generate HTML slides or ASCII diagrams. Understanding a codebase visually accelerates onboarding.

Voice input: On macOS, press Fn twice to activate dictation. Boris uses this for detailed prompts — roughly 3x faster than typing.


Anthropic Team Results in Practice

Boris's methods are not theoretical. The Anthropic team applies them across functions, and the results are concrete:

  • Security Engineering — stack trace analysis: 3x faster diagnosis
  • Data Infrastructure — Kubernetes cluster troubleshooting: resolved within 20 minutes
  • Growth Marketing — auto-generated ad variants: hundreds of variants in minutes
  • Legal — phone auto-response prototype: built directly by non-engineers
  • Data Science — React visualization app: built by a TypeScript beginner

One cultural shift stands out: the old workflow of "design doc → quick-and-dirty code → refactor → skip tests" has shifted to genuine Test-Driven Development. Claude makes TDD practical for teams that previously found it too slow.

Claude Code by the Numbers (SemiAnalysis, February 2026)

  • 4% of all public GitHub commits currently use Claude Code
  • 20%+ projected by end of 2026
  • Daily active users doubled month-over-month in January 2026
  • Top Spotify engineers report writing zero lines of code manually since December

Full Customization: What Boris Has Available

Claude Code supports 37 configuration settings and 84 environment variables. Key customization areas:

Terminal: Boris recommends Ghostty for synchronous rendering, 24-bit color, and full Unicode support.

Status line: Use /statusline to display current model, directory, remaining context, and cost.

Key bindings: Remap everything via /keybindings. Configurations load in real time.

Output style: Choose between Explanatory, Learning, and Custom modes.

Plugins: Install LSP integrations, MCP servers, skills, agents, and hooks via /plugin.

Check settings.json into git. Every team member gets an identical environment with zero setup.


The 5 Principles: An Actionable Summary

Boris's 40 tips reduce to five operating principles:

Principle 1: Parallelize immediately. Open three git worktrees right now. Launch three Claude sessions. This is the single highest-impact change available to any Claude Code user.

Principle 2: Plan first, implement once. For every complex task, start in Plan mode (Shift+Tab twice). A good plan produces a one-shot implementation. A missing plan produces rework.

Principle 3: Make CLAUDE.md your team's second brain. Every time Claude makes a mistake, add it to CLAUDE.md. Over time, Claude gets progressively smarter about your specific context. This compounds.

Principle 4: Always give Claude a way to verify. Connect a test suite, a browser test, or a CI pipeline. This is Boris's most important tip. It is what moves results from good to 2-3x better.

Principle 5: Experiment with your own workflow. Boris's setup is, by his own description, "surprisingly vanilla." Claude Code is built to be customized. Find the workflow that fits how you think and work.


What Comes After Coding Is Solved?

Boris's most provocative claim is that coding is solved. He is not focused on coding anymore — he is focused on what comes next.

The growth numbers give his claim weight. At 4% of public GitHub commits today and a projected 20% by year-end, Claude Code's adoption curve is steep. The developers at the frontier are already reporting that they write no code manually.

The question Boris is asking is the right one: when AI handles implementation, what becomes the scarce, high-value human contribution?

The answer shapes how teams should think about Claude Code today — not as a tool that helps you write code faster, but as infrastructure that transforms what "working on software" means.


Frequently Asked Questions

What model does Boris Cherny use in Claude Code? Boris uses Claude Opus 4.6 for all tasks. He previously used Opus 4.5 with Thinking mode and upgraded to Opus 4.6 when it launched. He selects High effort via the /model command.

How many Claude Code sessions does Boris run simultaneously? Up to 15: 5 terminal sessions (each on a separate git worktree) and 5-10 web sessions at claude.ai/code.

What is claude --worktree and when was it released? claude --worktree is a native git worktree flag released in Claude Code v2.1.49 on February 19, 2026. It runs each Claude session in an isolated worktree, preventing code conflicts across parallel sessions.

What is Boris's single most important Claude Code tip? Providing Claude with a verification method — a test suite, browser test, or CI pipeline. He states this improves output quality 2-3x compared to working without verification.

What goes in CLAUDE.md? Code style conventions, architecture decisions, preferred libraries, PR checklists, and rules preventing recurring mistakes. Update it every time Claude makes a mistake. Edit it ruthlessly to keep only high-signal content.

Why does Boris use the slowest model (Opus 4.6) instead of Sonnet? Because Opus 4.6 requires less steering and delivers better tool use. Fewer mid-task corrections mean lower total time to completion, even though individual responses are slower.

Does Boris write SQL directly? He has not written SQL directly in over six months. BigQuery MCP handles all data queries.


