This is a follow-up to why I switched from Cursor to Claude Code. That post covered the basics - context management, plan files, skills. This one covers two workflows I’ve built on top of Claude Code that have changed how I work day-to-day.
“Start on:” and git worktrees
Git worktrees let you check out multiple branches of the same repo into separate directories, sharing git history but keeping code changes isolated. Most people don’t use them because the mental overhead isn’t worth it for solo work. With Claude Code, the setup cost disappears. Claude Code now has built-in worktree support in the CLI, and there’s good coverage of the pattern from incident.io, Steve Kinney, and Developers Digest.
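The core mechanic is a handful of git commands. A minimal sketch (branch and directory names are illustrative):

```shell
# Create a second working directory for a new branch,
# sharing the same .git history as the main checkout.
git worktree add ../myrepo-fix-date-validation -b fix-date-validation

# List all worktrees attached to this repository.
git worktree list

# Remove the worktree once the branch is merged.
git worktree remove ../myrepo-fix-date-validation
```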
I have a convention: when I start a message with “start on:”, Claude automatically sets up everything for a new task. No confirmation, no questions. It pulls latest main, creates a branch, creates a worktree alongside the main repo, pushes the branch, and opens a draft PR. All before writing a single line of code.
The worktree naming follows a pattern: reponame-branchsuffix. So if I’m fixing date validation in the orders API, the worktree lands at orders-api-fix-date-validation right next to the main repo directory. Predictable, easy to find, easy to clean up.
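Sketched as shell commands, run from the main checkout, the setup looks roughly like this (branch, title, and repo names are illustrative, and `gh pr create` assumes the GitHub CLI):

```shell
# What "start on: fix date validation in the orders API" expands to:
git checkout main && git pull origin main           # latest main
git worktree add ../orders-api-fix-date-validation \
    -b fix-date-validation                          # branch + worktree
cd ../orders-api-fix-date-validation
git push -u origin fix-date-validation              # publish the branch
gh pr create --draft --title "Fix date validation"  # open a draft PR
```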
Why worktrees matter for AI development
The real value is parallel sessions. Each worktree can run its own Claude Code instance with its own context. I can have Claude working on a feature in one terminal while I fix a bug in another. Neither session interferes with the other - no stashing, no branch switching, no lost context.
This also makes PR review easier. Instead of switching branches in your main worktree and losing whatever state you had, you create a review worktree from the PR branch. Check the code, run the tests, delete the worktree when you’re done. Your main workspace stays untouched.
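For GitHub-hosted repos, a review worktree can be built from the PR's hidden ref. A sketch, assuming a hypothetical PR number 123:

```shell
# Fetch the PR head into a local review branch (GitHub's pull ref convention).
git fetch origin pull/123/head:review-pr-123
git worktree add ../myrepo-review-pr-123 review-pr-123

# Review and run tests without disturbing the main checkout.
cd ../myrepo-review-pr-123 && npm test

# Tear it down afterwards.
cd - && git worktree remove ../myrepo-review-pr-123
git branch -D review-pr-123
```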
Automated PR review pipeline
The second workflow triggers when a PR is created. Claude runs a three-step review process automatically - no prompting needed.
Step 1: Mechanical checks
TypeScript type checking, ESLint, and a full build run sequentially. If any of these fail with new errors, Claude fixes them before moving on. Pre-existing warnings get ignored - only new issues matter.
```shell
npx tsc --noEmit
npm run lint
npm run build
```
This catches the stuff that’s embarrassing to have in a PR - type errors, lint violations, broken builds.
Step 2: Five review agents in parallel
After the mechanical checks pass, Claude launches five specialised review agents simultaneously:
- Code reviewer - bugs, logic errors, security vulnerabilities, code quality
- Silent failure hunter - inadequate error handling, empty catch blocks, swallowed errors
- Test analyser - test coverage gaps, missing edge cases for changed files
- TypeScript reviewer - type safety, patterns, TS-specific best practices
- JSDoc reviewer - missing documentation on public APIs, redundant comments
These are TypeScript-specific - swap in whatever’s relevant for your stack. A Python project might use a type checker agent, a Ruff linter agent, and a docstring reviewer. The principle is the same: narrow focus per agent, run them all in parallel.
Each agent has a narrow focus. The code reviewer doesn’t care about JSDoc, the test analyser doesn’t care about type patterns. This produces more targeted feedback than a single general-purpose review pass.
Running them in parallel means the total review time is the duration of the slowest agent rather than the sum of all five.
Step 3: Documentation review
After the agents report back, Claude checks whether the PR description accurately reflects the changes. It also reviews whether project documentation (README, CLAUDE.md) needs updating based on the diff.
If the PR introduces a new API endpoint, that should be in the README. If it changes a convention Claude should know about, that goes in CLAUDE.md. Claude makes these updates and commits them to the PR branch directly.
This extends beyond documentation. Claude Code integrates with project management tools like Linear via MCP servers. After a PR is merged, Claude can update the linked issue status, add a comment with a summary of what shipped, or move the ticket to done. The review pipeline becomes a full close-the-loop workflow - code reviewed, docs updated, project board current.
Why automate this
The obvious answer is speed. A manual review of types, lint, build, code quality, error handling, tests, TypeScript patterns, documentation, and JSDoc would take significant time. Running it automatically on every PR means it always happens.
The less obvious answer is consistency. When reviewing your own code you skip things. You wrote it, so you know it works, so you don’t check as carefully. Automated agents don’t have that bias. The silent failure hunter catches every empty catch block, not just the ones you happen to notice. Every PR gets the same scrutiny whether it’s a hotfix or a planned feature.
Building your own workflows
Both of these workflows are defined in a single CLAUDE.md file. The “start on:” convention is about 10 lines of instructions. The PR review pipeline is maybe 30 lines. There’s no plugin system, no extension API, no build step. You write what you want Claude to do in plain English and it does it.
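To make that concrete, here is a compressed sketch of what such a CLAUDE.md might contain. The exact wording is mine, not a copy of the file the post describes:

```markdown
## "Start on:" convention
When a message begins with "start on:", without asking for confirmation:
1. Pull the latest main branch.
2. Create a branch named after the task.
3. Create a worktree at ../<reponame>-<branchsuffix> next to the main repo.
4. Push the branch and open a draft PR with `gh pr create --draft`.

## PR review pipeline
When a PR is created:
1. Run `npx tsc --noEmit`, `npm run lint`, and `npm run build`.
   Fix new errors; ignore pre-existing warnings.
2. Launch the five review agents in parallel and address their findings.
3. Check the PR description against the diff; update README and CLAUDE.md
   if the changes affect them, committing to the PR branch.
```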
This is what I meant in the previous post about self-modifying workflows. Automating a repetitive process is a paragraph of plain English in a CLAUDE.md file.