Compound Engineering

The Compound Loop

Instructor · 10 min

The problem with one-shot AI

Most developers use AI coding tools in isolation. They ask a question, get an answer, and move on. Next session, the AI starts from zero. Every conversation is a blank slate.

This is like hiring a brilliant contractor who gets amnesia every evening. They do great work - but they never learn your codebase, your conventions, your past mistakes.

Compound engineering fixes this.

The five-step loop

Every meaningful coding session should follow this loop:

PLAN > DEEPEN > WORK > REVIEW > CODIFY

Each step feeds the next. The final step - CODIFY - feeds back into the beginning, making every future session smarter.

1. Plan

Define what you're building before you touch code. This can be a full PRD for complex features or Claude Code's built-in plan mode for smaller tasks.

The plan captures: what you're building, why, acceptance criteria, and constraints. It becomes the contract between you and the AI.

# Simple: use plan mode
claude "add rate limiting to the /api/submit endpoint"
# Claude enters plan mode, proposes approach, waits for approval
 
# Complex: write a PRD first
claude "/create-prd rate-limiting"
# Generates a structured PRD with requirements, edge cases, test plan

2. Deepen

Before writing code, research the unknowns. What APIs will you call? What patterns exist in the codebase already? What edge cases could bite you?

This step uses parallel research - Claude can search docs, read related files, and check dependencies simultaneously.

claude "/deepen-plan"
# Researches dependencies, API docs, existing patterns in parallel
# Updates the plan with concrete implementation details

3. Work

Execute the implementation against clear acceptance criteria from your plan. This is where code gets written, tests get added, and features take shape.

claude "/work"
# Reads the PRD, implements against acceptance criteria
# Runs tests, fixes errors, iterates until criteria are met

4. Review

Don't just eyeball the diff. Run a structured, multi-perspective review. Security, performance, correctness, maintainability - each lens catches different issues.

claude "/review"
# Runs parallel review: security, performance, correctness, style
# Outputs findings with severity levels (P1/P2/P3)

5. Codify

This is the step most people skip - and it's the most important. After every session, capture what you learned. Update documentation. Add patterns to your CLAUDE.md. Record solutions to problems you solved.

claude "/codify"
# Extracts learnings from the session
# Updates docs/solutions/, patterns/, CLAUDE.md
# Next session starts smarter

Compound engineering means your AI gets better at YOUR codebase over time.

Each codify step deposits knowledge that future sessions withdraw. A pattern documented today prevents a bug tomorrow. A solution indexed this week saves an hour next month. The compound effect is real: teams using this loop report that their AI assistant feels like it "knows" their project after a few weeks.

How compounding works in practice

The knowledge from CODIFY flows back into PLAN through three mechanisms:

1. Solutions index - A searchable record of problems you've solved. Before tackling any issue, the system checks: "Have we seen this before?"

docs/solutions/learnings-index.md
- **2025-03-10**: Drizzle schema linter silently removes imports for
  tables not defined in schema. Always define Drizzle schema for ALL
  tables before writing route code.
- **2025-03-11**: `camelify()` converts keys but leaves values as strings.
  Drizzle timestamp columns crash on string dates. Coerce to `new Date()`
  after camelify.
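
The 2025-03-11 entry can be sketched in code. This is a minimal illustration of the fix described in the learning, not the project's actual implementation: `camelify()` and the `dateKeys` list are assumptions, and the coercion helper is hypothetical.

```typescript
// Sketch of the 2025-03-11 learning: camelify() converts snake_case
// keys but leaves values as strings, so timestamp values must be
// coerced to Date before the row reaches a Drizzle timestamp column.
// (coerceDates and dateKeys are illustrative, not from the lesson.)
function coerceDates(
  row: Record<string, unknown>,
  dateKeys: string[],
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...row };
  for (const key of dateKeys) {
    const value = out[key];
    // Only coerce string values; leave anything already a Date alone.
    if (typeof value === "string") out[key] = new Date(value);
  }
  return out;
}

const row = coerceDates(
  { createdAt: "2025-03-11T10:00:00Z", name: "feedback" },
  ["createdAt"],
);
// row.createdAt is now a Date instance rather than a string
```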

2. Pattern docs - Reusable approaches to recurring problems, stored where Claude reads them automatically.

3. CLAUDE.md updates - Project rules and conventions that Claude follows in every session. When you discover "always use escapeLike() with ilike() queries," that becomes a rule, not tribal knowledge.
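
As one concrete example, the `escapeLike()` rule above could be backed by a utility like this. The lesson doesn't show the implementation, so this is a hedged sketch of what such a helper might look like:

```typescript
// Hypothetical sketch of the escapeLike() utility referenced above.
// SQL LIKE/ILIKE treats % (any sequence), _ (any character), and the
// escape character specially, so user input must be escaped before it
// is interpolated into a pattern.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => `\\${ch}`);
}

// Assumed usage with a Drizzle-style query builder:
// db.select().from(users)
//   .where(ilike(users.name, `%${escapeLike(query)}%`));
```

Without the escape, a search for `50%` matches every row starting with `50`; with it, the wildcard is treated as a literal character.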

A real example

Here's the loop applied to building a feedback system:

- **Plan** - PRD defined: collect user feedback via form, notify team on Telegram, store in DB
- **Deepen** - Researched the Telegram bot API, checked the existing DB schema, found similar form patterns
- **Work** - Built the feature: API route, form component, Telegram integration, DB migration
- **Review** - Found 3 issues: missing auth on the API route, SQL injection via LIKE wildcards, Telegram markdown breaking on special chars
- **Codify** - Documented all 3 fixes. Added an escapeLike() utility pattern. Updated CLAUDE.md with an API auth checklist

Next time someone builds an API route, the review catches auth issues automatically. Next time someone uses ilike(), Claude knows to use escapeLike(). The codebase got smarter.

The compound effect over time

Session 1: Claude knows nothing about your project. You explain everything.

Session 10: Claude follows your conventions, avoids known pitfalls, and suggests approaches that match your architecture.

Session 50: Claude catches anti-patterns before you write them, references past solutions to similar problems, and produces code that reads like a team member wrote it.

This doesn't happen by magic. It happens because you codified knowledge at every step.

Walk through a real compound loop on screen:

  1. Show a CLAUDE.md with learnings already captured from past sessions
  2. Start a new feature - point out how Claude references existing patterns
  3. Run /review and show the multi-perspective output
  4. Run /codify and show the new entries appearing in docs/solutions/

The key moment is when Claude references a past learning to avoid a mistake. That's when the audience sees the compound effect click.

Ask the audience: "How many of you have solved the same bug twice because you forgot the fix?" That's the problem codify solves.

Start small

You don't need all five steps on day one. Start with just WORK + CODIFY. After each session, spend 2 minutes capturing what you learned in your CLAUDE.md. That alone creates compounding. Add the other steps as they become natural.
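
A codify entry can be as small as a few bullets. Drawing on the feedback-system example earlier, a hypothetical CLAUDE.md addition might look like:

```markdown
## Learned conventions
- Always wrap user input in `escapeLike()` before passing it to `ilike()`.
- Every new API route must check auth before touching the DB.
- Escape Telegram markdown special characters before sending notifications.
```

Two minutes of this after each session is what turns one-off fixes into rules Claude follows automatically.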