Published: March 2026
If you maintain multiple AI projects, each accumulates hard-won production knowledge independently. Project A discovers a circuit breaker pattern after a 3-hour outage. Project B hits the same issue two months later and loses another 3 hours. Project C never learns from either.
Claude Code’s rules, skills, hooks, and crons can solve this – but only if you design a shared knowledge layer that grows automatically across sessions and projects.
This chapter shows how to build a self-sustaining “AI DNA” system using every major Claude Code capability. The architecture is generic – adapt the content to your own AI stack.
The system has 6 layers, each using a different Claude Code capability:
- **Layer 1 – Shared Rules** (`~/.claude/rules/ai/`): always loaded, every session
- **Layer 2 – Shared Skills** (`~/.claude/skills/shared-*/`): on-demand, triggered by keywords
- **Layer 3 – Persistent Memory** (Basic Memory MCP or files): searchable knowledge graph
- **Layer 4 – Project Context** (CLAUDE.md + MEMORY.md): per-project pointers
- **Layer 5 – Automation** (hooks + crons): zero-manual-effort capture
- **Layer 6 – Lifecycle** (freshness SLAs + review): prevents knowledge rot
Each layer serves a different purpose with different token costs:
| Layer | Token Cost | When Loaded | Best For |
|---|---|---|---|
| Rules | ~500 tokens/rule | Every message | Universal patterns (always need) |
| Skills | ~40 tokens (description only) | When triggered | Detailed guides (need sometimes) |
| Memory | 0 (on-demand search) | When searched | Historical decisions, gotchas |
| Context | ~200 tokens | Every message | Project-specific pointers |
| Hooks | 0 (external process) | On events | Automatic capture |
| Lifecycle | 0 (cron job) | Weekly/monthly | Staleness prevention |
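The file-backed layers can be scaffolded in one shot. A sketch, with directory names taken from the paths used throughout this chapter (adjust to your own layout):

```bash
#!/usr/bin/env bash
# Sketch: scaffold the file-backed layers of the DNA system.
scaffold() {  # usage: scaffold [root]   (root defaults to $HOME)
  local root="${1:-$HOME}"
  mkdir -p \
    "$root/.claude/rules/ai" \
    "$root/.claude/skills" \
    "$root/knowledge/ai" \
    "$root/knowledge/decisions" \
    "$root/knowledge/investigations" \
    "$root/knowledge/research-cache"
}
```

Run it once per machine; `mkdir -p` makes it safe to re-run.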
- **Location**: `~/.claude/rules/ai/`
- **Cost**: Always loaded (~500 tokens per rule)
- **Use for**: Universal patterns that apply to ALL your AI projects
Create a dedicated ai/ subdirectory in your global rules:
```
~/.claude/rules/ai/
├── core-patterns.md       # Framework-specific patterns
├── model-optimization.md  # Model selection, caching, cost reduction
├── resilience.md          # Circuit breaker, fallback, retry
├── orchestration.md       # Multi-agent coordination patterns
├── observability.md       # Logging, baselines, cost tracking
└── methodology.md         # The DNA system's own rules
```
Each rule should be concise (60-120 lines) with clear scope:
```markdown
# [Domain] Patterns -- Universal

**Scope**: ALL projects using [framework/tool]
**Authority**: MANDATORY for [what it enforces]
**Source**: [Which projects contributed these patterns]

---

## Pattern Name

[2-3 sentence description]

## Code Example

[Minimal working pattern]

## Common Gotchas

| Gotcha | Fix |
|--------|-----|
| [trap] | [solution] |

---

**Last Updated**: YYYY-MM-DD
```
Global rules consume ~500 tokens each. Track your budget:

```bash
# Count total global rules
find ~/.claude/rules -name "*.md" | wc -l

# Rough token estimate (5 tokens/line)
find ~/.claude/rules -name "*.md" -exec wc -l {} + | tail -1
```
Stay under 85% of the skill budget (~16K chars for descriptions), and use `paths:` frontmatter for rules that only apply to specific file types.
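The 85% threshold can be checked mechanically. A sketch, using the ~16K-char description budget quoted above:

```bash
#!/usr/bin/env bash
# Sketch: warn when skill-description usage passes 85% of the ~16K-char budget.
budget_pct() {  # usage: budget_pct USED_CHARS LIMIT_CHARS
  echo $(( $1 * 100 / $2 ))
}

USED=$(find ~/.claude/skills -name SKILL.md -exec grep -m1 'description:' {} + 2>/dev/null | wc -c)
PCT=$(budget_pct "$USED" 16000)
echo "Skill description budget: ${PCT}% used"
if (( PCT >= 85 )); then
  echo "WARNING: over 85%; archive or trim skill descriptions"
fi
```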
- **Location**: `~/.claude/skills/shared-*/`
- **Cost**: ~40 tokens per skill (description only, full content on-demand)
- **Use for**: Detailed guides that are too long for rules
| Content | Use |
|---|---|
| Universal pattern, always needed | Rule (always loaded) |
| Detailed guide, needed sometimes | Skill (on-demand) |
| Step-by-step workflow | Skill (on-demand) |
| One-liner enforcement | Rule (always loaded) |
```markdown
---
name: shared-[domain]
description: "[Action verb] [what it does]. Use when [trigger scenarios]."
---

# [Domain] Guide

**Source**: [Which projects contributed]

## When to Use

1. When building [scenario 1]
2. When debugging [scenario 2]
3. When deciding between [option A] vs [option B]

## Decision Tree

| Need | Approach | Why |
|------|----------|-----|
| [scenario] | [solution] | [rationale] |

## Common Gotchas

[Project-validated traps and fixes]

## Key Files Reference

[Point to relevant code locations per project]
```
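Skills silently fail to load when metadata is inconsistent, so a quick sanity check helps. A hypothetical helper, assuming shared skills live under `~/.claude/skills/shared-*/`:

```bash
#!/usr/bin/env bash
# Sketch: verify each shared skill's name: field matches its directory name.
skill_name_ok() {  # usage: skill_name_ok /path/to/shared-foo
  local expected actual
  expected=$(basename "$1")
  actual=$(grep -m1 '^name:' "$1/SKILL.md" 2>/dev/null | sed 's/^name:[[:space:]]*//')
  [[ "$actual" == "$expected" ]]
}

for dir in ~/.claude/skills/shared-*/; do
  [[ -d "$dir" ]] || continue
  skill_name_ok "${dir%/}" || echo "MISMATCH: $dir"
done
```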
Naming conventions:

- Path: `shared-[domain]/SKILL.md` (e.g., `shared-rag-architecture/`)
- No `-skill` suffix on directory names
- The `name:` field must match the directory name exactly

- **Location**: Dedicated folder in your knowledge system (e.g., Basic Memory MCP, or plain files)
- **Cost**: Zero tokens until searched
- **Use for**: Architecture decisions, production gotchas, model selection history
| Note Type | Content | Example |
|---|---|---|
| Architecture Decisions | ADRs with status lifecycle | “Why we chose framework X over Y” |
| Cross-Project Patterns | Patterns validated in 2+ projects | “Budget enforcement works in projects A and B” |
| Production Gotchas | Bugs, traps, and fixes from all projects | “Model X returns empty on Hebrew input” |
| Model Selection History | What models you tried, results, costs | “Flash+pipeline beats Pro standalone” |
| Knowledge Growth Log | Track new learnings over time | Weekly entries of discoveries |
Every architecture decision should have a status:
PROPOSED → ACCEPTED → ACTIVE → DEPRECATED → SUPERSEDED
This prevents acting on outdated decisions. When a pattern is replaced, mark it DEPRECATED with a pointer to the replacement.
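Assuming each decision note carries a `Status:` line, a quick tally (a sketch) surfaces what needs review:

```bash
#!/usr/bin/env bash
# Sketch: count decision notes by lifecycle status.
# Assumes each note contains a line like "Status: ACTIVE".
status_tally() {  # usage: status_tally /path/to/decisions
  grep -h '^Status:' "$1"/*.md 2>/dev/null | awk '{print $2}' | sort | uniq -c | sort -rn
}
```

Run it against your decisions folder during the weekly review; a growing DEPRECATED count without matching SUPERSEDED pointers is a smell.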
Track discoveries as they happen:
| Date | Project | Domain | Pattern | Shareable? |
|------|---------|--------|---------|------------|
| 2026-03-30 | ProjectA | Resilience | Circuit breaker 120s cooldown optimal | YES |
| 2026-03-30 | ProjectB | RAG | kNN K=20 outperforms K=10 | YES |
This log feeds the weekly consolidation (Layer 5).
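Appending rows can be scripted so the log never depends on manual edits. A sketch; the default log path is an assumption, point it at your own notes:

```bash
#!/usr/bin/env bash
# Sketch: append a row to the knowledge growth log.
# Default log path is an assumption; override with the fifth argument.
log_growth() {  # usage: log_growth PROJECT DOMAIN "PATTERN" YES|NO [logfile]
  local log="${5:-$HOME/knowledge/growth-log.md}"
  mkdir -p "$(dirname "$log")"
  printf '| %s | %s | %s | %s | %s |\n' \
    "$(date +%Y-%m-%d)" "$1" "$2" "$3" "$4" >> "$log"
}
```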
- **Location**: Each project’s CLAUDE.md and MEMORY.md
- **Cost**: ~200 tokens per project
- **Use for**: Pointers from each project to the shared layer
Add a section to each project’s CLAUDE.md:
```markdown
## Shared AI Knowledge (Cross-Project)

This project shares universal AI patterns with [other projects]:

- **Rules**: `~/.claude/rules/ai/` (N files)
- **Skills**: `~/.claude/skills/shared-*/` (N skills)
- **Memory**: [path to shared knowledge notes]
```
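With many projects, adding the pointer by hand gets skipped. A sketch helper that appends it idempotently (adjust the body to your layout):

```bash
#!/usr/bin/env bash
# Sketch: idempotently append the shared-knowledge pointer to a CLAUDE.md.
add_pointer() {  # usage: add_pointer path/to/CLAUDE.md
  grep -q '^## Shared AI Knowledge' "$1" 2>/dev/null && return 0  # already present
  cat >> "$1" <<'EOF'

## Shared AI Knowledge (Cross-Project)

- **Rules**: `~/.claude/rules/ai/`
- **Skills**: `~/.claude/skills/shared-*/`
- **Memory**: [path to shared knowledge notes]
EOF
}
```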
Add a section to each project’s auto-memory:
```markdown
## Shared AI Knowledge

- Rules at ~/.claude/rules/ai/ provide universal patterns
- Skills at ~/.claude/skills/shared-*/ provide detailed guides
- Knowledge notes at [path] provide decisions and gotchas
```
- **Cost**: Zero tokens (hooks and crons run externally)
- **Use for**: Automatic capture and maintenance with zero manual effort
If you use a research MCP (like Perplexity), auto-save results to your knowledge system:
```bash
#!/usr/bin/env bash
# PostToolUse hook — saves research results automatically
# Matcher: mcp__perplexity__search
# Claude Code passes the hook payload as JSON on stdin
INPUT=$(cat)
QUERY=$(echo "$INPUT" | jq -r '.tool_input.query // "unknown"')
RESULT=$(echo "$INPUT" | jq -r '.tool_response.content // ""')

# Skip if empty
[[ -z "$RESULT" || "$RESULT" == "null" ]] && exit 0

# Generate filename from query
SLUG=$(echo "$QUERY" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | head -c 60)
CACHE_DIR="$HOME/knowledge/research-cache"
CACHE_FILE="$CACHE_DIR/${SLUG}.md"

# Dedup — skip if already cached
[[ -f "$CACHE_FILE" ]] && exit 0

# Save with frontmatter
mkdir -p "$CACHE_DIR"
cat > "$CACHE_FILE" <<EOF
---
title: $QUERY
date: $(date +%Y-%m-%d)
tags: [research, auto-cached]
---

$RESULT
EOF
```
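A hook that never fires is a silent failure mode, so exercise it with a simulated payload before trusting it. A sketch; the example hook path is hypothetical, and the payload shape (`tool_input`/`tool_response`) follows Claude Code's hook JSON:

```bash
#!/usr/bin/env bash
# Sketch: feed a hook a simulated PostToolUse payload to verify it fires.
simulate_hook() {  # usage: simulate_hook /path/to/hook.sh "query text"
  local payload
  payload=$(printf '{"tool_input":{"query":"%s"},"tool_response":{"content":"simulated result"}}' "$2")
  echo "$payload" | "$1" "$payload"  # covers both stdin- and argv-reading hooks
}

# Example (hook path is hypothetical):
# simulate_hook ~/.claude/hooks/save-research.sh "vector db comparison"
```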
Save critical context before compaction so you can recover:
```bash
#!/usr/bin/env bash
# PreCompact hook — lists loaded AI rules for post-compact reference
echo "### AI Rules Loaded (for post-compact reload)"
for f in ~/.claude/rules/ai/*.md; do
  [[ -f "$f" ]] && echo "- $(basename "$f")"
done
```
Load shared knowledge status on session start:
```bash
#!/usr/bin/env bash
# SessionStart hook — check for stale knowledge
STALE=$(find ~/knowledge/ai/ -name "*.md" -mtime +60 2>/dev/null | wc -l)
if [[ $STALE -gt 0 ]]; then
  echo "### Warning: $STALE AI knowledge notes older than 60 days"
fi
```
Review the growth log and check freshness:
```bash
#!/usr/bin/env bash
# Weekly cron (e.g., Sunday 3 AM)
# Checks knowledge freshness against SLAs
REPORT="$HOME/.claude/logs/consolidation-$(date +%Y-%m-%d).md"
mkdir -p "$(dirname "$REPORT")"
echo "# Weekly Knowledge Consolidation" > "$REPORT"
echo "> Date: $(date +%Y-%m-%d)" >> "$REPORT"

# Check for stale notes
for dir in decisions investigations research-cache; do
  MAX_DAYS=90  # Adjust per note type
  STALE=$(find "$HOME/knowledge/$dir" -name "*.md" -mtime +$MAX_DAYS 2>/dev/null | wc -l)
  TOTAL=$(find "$HOME/knowledge/$dir" -name "*.md" 2>/dev/null | wc -l)
  echo "- **$dir**: $STALE/$TOTAL stale (SLA: ${MAX_DAYS}d)" >> "$REPORT"
done
```
A smoke test script that verifies the entire system:
```bash
#!/usr/bin/env bash
# Monthly cron (1st of month, 5 AM)
echo "=== AI Knowledge Smoke Test ==="
echo "1. Rules: $(ls ~/.claude/rules/ai/*.md 2>/dev/null | wc -l) files"
echo "2. Skills: $(ls -d ~/.claude/skills/shared-*/ 2>/dev/null | wc -l) dirs"
echo "3. Knowledge notes: $(ls ~/knowledge/ai/*.md 2>/dev/null | wc -l) files"
BUDGET=$(find ~/.claude/skills -name SKILL.md -exec grep -m1 'description:' {} \; | wc -c)
echo "4. Skill budget: $((BUDGET * 100 / 16000))%"
```
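Register both jobs with cron. The entries below are a sketch; the script names and paths are assumptions for illustration:

```
# crontab -e  (script paths are assumptions)
0 3 * * 0  $HOME/.claude/scripts/weekly-consolidation.sh
0 5 1 * *  $HOME/.claude/scripts/monthly-smoke-test.sh
```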
- **Use for**: Preventing knowledge rot over time
Different note types decay at different rates:
| Note Type | Max Age (no edits) | Action When Stale |
|---|---|---|
| Decision | 90 days | Review: still valid? |
| Investigation | 60 days | Archive unless referenced |
| Log | 30 days | Auto-archive |
| Research cache | 90 days | Re-search if technology changed |
When a pattern appears in 2+ projects, promote it:
```
Project discovers pattern
  → Capture in growth log
  → Weekly consolidation flags cross-project patterns
  → Review: Is it universal?
      → YES: Promote to ~/.claude/rules/ai/ or skills/shared-*/
      → NO: Keep in project-level rules
```
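Promotion itself is just a copy plus a manual review of the rule's scope headers. A sketch helper; the destination defaults to the Layer 1 directory:

```bash
#!/usr/bin/env bash
# Sketch: promote a project-level rule to the shared global rules.
promote_rule() {  # usage: promote_rule path/to/rule.md [dest_dir]
  local dest_dir="${2:-$HOME/.claude/rules/ai}"
  local dest="$dest_dir/$(basename "$1")"
  [[ -f "$dest" ]] && { echo "already promoted: $dest"; return 1; }
  mkdir -p "$dest_dir"
  cp "$1" "$dest"
  echo "promoted $(basename "$1"); review its Scope/Authority headers"
}
```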
If you use a documentation command (e.g., `/document`), add two checks:

- Does `~/.claude/rules/ai/` exist with 3-7 universal rules extracted from your projects?
- Does `~/.claude/skills/shared-*/` exist with 2-4 detailed guides?

Once set up, the system is self-sustaining.
Monitor your budget monthly:
```bash
# Skills budget (descriptions only)
find ~/.claude/skills -name SKILL.md -exec grep -m1 'description:' {} \; | wc -c
# Target: under 13,600 chars (85% of 16K)

# Rules count
find ~/.claude/rules -name "*.md" | wc -l
# Each rule costs ~500 tokens in every message
```
| Pitfall | Fix |
|---|---|
| Rules too long (>120 lines) | Split into rule (short) + skill (detailed) |
| Rules too project-specific | Keep in project .claude/rules/, not global |
| Knowledge never gets reviewed | Weekly cron + freshness SLAs |
| Growth log stays empty | Check hooks are firing (test with simulated input) |
| Token budget overflow | Archive low-value skills, use `paths:` on domain rules |
| Same pattern rediscovered | Promote to shared rule after 2nd occurrence |
After creating shared rules, your project-level rules may overlap. Audit and remove duplicates:
```bash
# Find potential overlaps between project rules and global AI rules
for f in .claude/rules/*.md; do
  TOPIC=$(head -1 "$f" | sed 's/^# //')
  MATCH=$(grep -rl "$TOPIC" ~/.claude/rules/ai/ 2>/dev/null | head -1)
  [[ -n "$MATCH" ]] && echo "OVERLAP: $(basename "$f") ↔ $(basename "$MATCH")"
done
```
For each overlap, keep whichever copy is more specific: delete the project rule if the global rule fully covers it, or trim it down to the project-specific parts.
| What | Where | Cost | Loading |
|---|---|---|---|
| Universal patterns | `~/.claude/rules/ai/` | ~500 tokens/rule | Always loaded |
| Detailed guides | `~/.claude/skills/shared-*/` | ~40 tokens (desc) | Keyword triggered |
| Decisions & gotchas | Knowledge notes | 0 (on-demand) | Searchable |
| Project pointers | CLAUDE.md + MEMORY.md | ~200 tokens | Always loaded |
| Research capture | PostToolUse hook | 0 | Automatic |
| Freshness checks | Weekly/monthly crons | 0 | Automatic |
The key insight: every layer has a different token cost and loading strategy. Rules are expensive but always available. Skills are cheap but on-demand. Memory is free but requires search. Hooks and crons are invisible. Use each layer for what it does best.