Vercel Labs released react-best-practices, a set of 40+ performance rules for React and Next.js, packaged as an installable agent skill. It’s designed to be read by AI coding tools - Claude Code, Cursor, Copilot - so they can catch performance problems while writing code rather than after.
## What’s in it
The rules are split into eight categories, ordered by real-world impact:
- Async waterfall elimination - sequential `await` calls that should be parallel (see the sketch after this list)
- Bundle size - barrel file re-exports, unnecessary client-side dependencies
- Server-side performance - missing caching, redundant data fetching
- Client-side data fetching - SWR/React Query misuse, over-fetching
- Re-render optimisation - missing memoisation, unstable references
- Rendering performance - layout thrashing, expensive computations in render paths
- Advanced patterns - streaming, partial prerendering, route segment config
- JavaScript performance - unnecessary closures, inefficient iteration
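
The first category has the clearest payoff, so here’s a minimal sketch of it - `getUser` and `getPosts` are hypothetical stand-ins for any two independent async calls:

```ts
// Hypothetical fetchers standing in for any two independent requests.
async function getUser(): Promise<{ name: string }> {
  return { name: "ada" };
}
async function getPosts(): Promise<string[]> {
  return ["hello"];
}

// Before: a waterfall. The second await waits for the first even though
// the two calls don't depend on each other; latency is the sum of both.
async function loadSequential() {
  const user = await getUser();
  const posts = await getPosts();
  return { user, posts };
}

// After: start both requests at once and await them together;
// latency is whichever call is slower.
async function loadParallel() {
  const [user, posts] = await Promise.all([getUser(), getPosts()]);
  return { user, posts };
}
```

With two 200ms requests, the sequential version takes roughly 400ms and the parallel one roughly 200ms - which is why this category sits at the top.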
Each rule has a severity rating from CRITICAL down to LOW, with before/after code examples. The ordering matters - waterfalls and bundle size come first because they have the biggest impact. Re-render micro-optimisations come last.
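
For contrast, the kind of lower-impact rule that sits at the bottom of that ordering looks something like this - a minimal sketch of an unstable-reference fix, where `Row`, `List`, and `onSelect` are hypothetical names rather than anything from the rule set:

```tsx
import { memo, useCallback, useState } from "react";

// A memoized child: it only re-renders when its props change identity.
const Row = memo(function Row({ onSelect }: { onSelect: () => void }) {
  return <button onClick={onSelect}>select</button>;
});

function List() {
  const [count, setCount] = useState(0);

  // Before: an inline handler creates a new function identity on every
  // render, so memo(Row) re-renders anyway.
  // const onSelect = () => setCount((c) => c + 1);

  // After: useCallback keeps the reference stable across renders,
  // letting the memoized child skip re-rendering.
  const onSelect = useCallback(() => setCount((c) => c + 1), []);

  return (
    <>
      <p>{count}</p>
      <Row onSelect={onSelect} />
    </>
  );
}

export default List;
```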
## How it works
The repo compiles all rules into an AGENTS.md file. When you install the skill, your AI coding tool loads this file as context. It’s the same mechanism as CLAUDE.md or .cursorrules - a document the model reads before writing code.
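
A compiled rule entry might look something like this - the excerpt is purely illustrative, not the repo’s actual wording:

```markdown
<!-- Hypothetical excerpt - illustrates the shape, not the repo's wording -->
### Parallelize independent data fetches

Severity: CRITICAL

Sequential `await` calls on independent promises add their latencies
together. Incorrect: `const a = await getA(); const b = await getB();`
Correct: `const [a, b] = await Promise.all([getA(), getB()]);`
```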
Install it with:
```sh
npx skills add vercel-labs/agent-skills
```
Or just drop the AGENTS.md file into your project root manually.
## Why this matters
Most React performance work happens after the fact. Something is slow, you profile it, you fix it. These rules encode the patterns that experienced developers already know to avoid - the things you’d catch in code review if you had time to review every line.
Packaging them as an agent skill means the AI catches them at write time. No profiling, no review cycle. The model sees you writing a sequential waterfall and rewrites it as parallel before you’ve finished the function.
The interesting part is the format rather than the rules themselves, which are well-documented elsewhere. Structured markdown with severity ratings and code examples is exactly the kind of input LLMs are good at following. The model can read the intent of a rule and propose the right fix, instead of merely flagging a violation as a linter would.
This is part of a broader pattern. Vercel’s agent-skills repo is a registry of installable skill packs, with React best practices as the first entry.