Headroom Learn¶
Offline failure learning for coding agents. Analyzes past conversations, finds what went wrong, correlates it with what eventually worked, and writes specific project-level learnings that prevent the same mistakes next session.
Quick Start¶
```bash
# See recommendations for current project (dry-run, no changes)
headroom learn

# Write recommendations to CLAUDE.md and MEMORY.md
headroom learn --apply

# Analyze a specific project
headroom learn --project ~/my-project --apply

# Analyze all projects
headroom learn --all --apply
```
How It Works¶
```text
Past Sessions → Scanner → Analyzer → Writer → CLAUDE.md / MEMORY.md
                   │          │          │
                   │          │          └─ Writes marker-delimited sections
                   │          │             (replaced on re-run, not duplicated)
                   │          │
                   │          └─ Success Correlation: for each failure,
                   │             finds what succeeded and extracts the diff
                   │
                   └─ Reads ~/.claude/projects/*.jsonl
                      (extensible to Cursor, Codex, etc.)
```
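The scanner stage can be sketched as follows. The JSONL event shape used here (`type`, `tool`, `args`, `is_error`) is an assumption for illustration, not Headroom's actual schema:

```python
import json

def scan_jsonl_lines(lines):
    """Normalize JSONL session events into (tool, args, ok) records.

    Field names are illustrative assumptions, not Headroom's real schema.
    """
    calls = []
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "tool_call":
            calls.append({
                "tool": event["tool"],
                "args": event.get("args", ""),
                # an event is a success unless it carries an error flag
                "ok": not event.get("is_error", False),
            })
    return calls
```

Whatever the real log format, the point is the same: every scanner reduces its tool's logs to one normalized record shape that the analyzers can share.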
Success Correlation¶
The core innovation. Instead of cataloging failures ("Read failed 5 times"), Headroom finds what the model did to fix each failure:
- Failed: `Read axion-formats/src/main/java/.../FirstClassEntity.java`
- Then succeeded: `Read axion-scala-common/src/main/scala/.../FirstClassEntity.scala`
- Learning: "`FirstClassEntity` is at `axion-scala-common/`, not `axion-formats/`"
This produces specific, actionable corrections — not generic advice.
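A minimal sketch of this correlation, assuming normalized tool-call records kept in session order (the record shape and pairing rule are illustrative, not Headroom's actual algorithm):

```python
def correlate(calls):
    """Pair each failed call with the next successful call of the same tool.

    Each (failed_args, succeeded_args) pair is a candidate correction.
    """
    corrections = []
    for i, call in enumerate(calls):
        if call["ok"]:
            continue
        for later in calls[i + 1:]:
            if later["tool"] == call["tool"] and later["ok"]:
                corrections.append((call["args"], later["args"]))
                break
    return corrections

calls = [
    {"tool": "Read", "args": "axion-formats/.../FirstClassEntity.java", "ok": False},
    {"tool": "Read", "args": "axion-scala-common/.../FirstClassEntity.scala", "ok": True},
]
correlate(calls)
# → [("axion-formats/.../FirstClassEntity.java",
#     "axion-scala-common/.../FirstClassEntity.scala")]
```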
What It Learns¶
1. Environment Facts → CLAUDE.md¶
Which runtime commands work vs fail.
```markdown
### Environment
- **Python**: use `uv run python` (not `python3` — modules not available outside venv)
```
2. File Path Corrections → CLAUDE.md¶
Wrong paths the model keeps guessing, with the correct locations.
```markdown
### File Path Corrections
- `axion-common/src/.../AxionSparkConstants.scala`
  → actually at `axion-spark-common/src/.../AxionSparkConstants.scala`
```
3. Search Scope → CLAUDE.md¶
Which directories to search in (narrow paths fail, broader ones work).
4. Command Patterns → CLAUDE.md¶
How commands should (and shouldn't) be run.
```markdown
### Command Patterns
- **user_prefers_manual**: User rejected gradle 18 times — show the command, don't execute
- **python_runtime**: Use `uv run python` not `python3` (ModuleNotFoundError)
```
5. Known Large Files → CLAUDE.md¶
Files that need offset/limit with Read.
6. Retry Prevention → MEMORY.md¶
Specific suggestions derived from actual corrections.
7. Permission Notes → MEMORY.md¶
Commands repeatedly rejected — model should suggest them to the user instead.
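One way such permission notes could be derived: count rejections per command and flag any that cross a threshold. The `rejected` field and the threshold value are assumptions for illustration, not Headroom's actual model:

```python
from collections import Counter

def rejected_commands(calls, threshold=3):
    """Return commands the user rejected at least `threshold` times.

    These should be suggested to the user rather than executed.
    """
    rejections = Counter(
        c["args"] for c in calls
        if c["tool"] == "Bash" and c.get("rejected")
    )
    return [cmd for cmd, n in rejections.items() if n >= threshold]
```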
Where Learnings Go¶
| Pattern | Destination | Why |
|---|---|---|
| Environment, paths, search scope, commands, large files | CLAUDE.md | Stable project facts, version-controllable |
| Missing paths, retry patterns, permissions | MEMORY.md | May change, agent-specific |
CLAUDE.md lives in your project directory. MEMORY.md lives in `~/.claude/projects/*/memory/`.
Marker-Based Updates¶
Headroom manages a clearly-delimited section in each file:
```markdown
<!-- headroom:learn:start -->
## Headroom Learned Patterns
*Auto-generated by `headroom learn` — do not edit manually*
...
<!-- headroom:learn:end -->
```
On re-run, only the content between markers is replaced. Your existing file content is preserved.
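One possible implementation of this marker-delimited update (a sketch, not Headroom's actual code): replace the text between the markers if present, otherwise append a fresh section.

```python
import re

START = "<!-- headroom:learn:start -->"
END = "<!-- headroom:learn:end -->"

def update_section(existing, new_body):
    """Replace the marker-delimited section, or append one if missing."""
    section = f"{START}\n{new_body}\n{END}"
    pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
    if pattern.search(existing):
        # lambda avoids re.sub treating backslashes in new_body as escapes
        return pattern.sub(lambda _: section, existing)
    return existing.rstrip() + "\n\n" + section + "\n"
```

Re-running is idempotent: only the span between the markers changes, so hand-written content around it survives.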
Architecture¶
```text
Scanner (adapter)      →  Analyzer (generic)       →  Writer (adapter)
├── ClaudeCodeScanner     ├── EnvironmentAnalyzer     ├── ClaudeCodeWriter
├── (CursorScanner)       ├── StructureAnalyzer       ├── (CursorWriter)
└── (GenericScanner)      ├── CommandAnalyzer         └── (GenericWriter)
                          ├── RetryAnalyzer
                          └── CrossSessionAnalyzer
```
- **Scanners** read tool-specific log formats and produce normalized `ToolCall` sequences.
- **Analyzers** work on `ToolCall` — the same analysis applies to any agent system.
- **Writers** output to tool-specific context injection mechanisms.
To add support for a new agent (e.g., Cursor):
1. Write `CursorScanner(ConversationScanner)` — reads Cursor's log format
2. Write `CursorWriter(ContextWriter)` — writes to `.cursorrules`
3. Same analyzers, same models, same recommendations
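The adapter seams might look like this. Class names follow the diagram above, but the method names and signatures are assumptions for illustration:

```python
from abc import ABC, abstractmethod

class ConversationScanner(ABC):
    @abstractmethod
    def scan(self, log_dir):
        """Read tool-specific logs; return normalized ToolCall records."""

class ContextWriter(ABC):
    @abstractmethod
    def write(self, project_dir, recommendations):
        """Render recommendations into the tool's context mechanism."""

class CursorScanner(ConversationScanner):
    def scan(self, log_dir):
        # would parse Cursor's log format into ToolCall records
        raise NotImplementedError

class CursorWriter(ContextWriter):
    def write(self, project_dir, recommendations):
        # would render recommendations into .cursorrules
        raise NotImplementedError
```

Because analyzers depend only on the normalized records, a new agent needs exactly these two adapters and nothing else.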
CLI Reference¶
```text
headroom learn [OPTIONS]

Options:
  --project PATH     Project directory to analyze (default: current directory)
  --all              Analyze all discovered projects
  --apply            Write recommendations (default: dry-run)
  --claude-dir PATH  Path to .claude directory (default: ~/.claude)
```
Real-World Results¶
Tested on 67,583 tool calls across 23 projects:
| Metric | Value |
|---|---|
| Failure rate | 7.5% (5,066 failures) |
| Corrections extracted | 164 per project (avg) |
| Specific path corrections | 22 (axion project) |
| Search scope corrections | 24 (axion project) |
| Command patterns learned | 5 (axion project) |
| Estimated preventable waste | ~27 MB across corpus |