The complete guide to Claude Code setup. 100+ hours saved. 370x optimization. Production-tested patterns for skills, hooks, and MCP integration.
DEPRECATED (Feb 2026): Claude Code now natively loads all skills from
~/.claude/skills/ and matches them using the description: field. The custom pre-prompt hook described here is no longer needed. This chapter is kept for historical reference. See Skill Activation System for the current approach.
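Under the native approach, the matching signal lives in each skill's SKILL.md frontmatter. A minimal sketch (skill name and description wording are illustrative):

```markdown
---
name: beecom-oauth2-skill
description: Authenticate to the Beecom API using OAuth2 client credentials before making any request.
---
```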
Created: 2025-12-23
Source: Production Entry #203
Pattern: Scott Spence filtering + Skills-first ordering
Evidence: 500/500 test score (100% across 25 tests)
Symptom: Skills matched perfectly but Claude ignored them
Example:
Query: "Get Beecom API data"
Matched: beecom-oauth2-skill (has OAuth2 credentials)
Claude: [writes inline axios code without auth]
Result: 403 Missing Authentication Token
Pattern: “MANDATORY PLANNING PHASE - BEFORE taking action…”
Result: FAIL (0%)
Why: Claude is trained to minimize friction (planning = extra steps)

Pattern: “USER EXPECTS SKILL-BASED RESPONSE…”
Result: FAIL (0%)
Why: Still showed all 97 skills (information overload)

Pattern: “FIRST WORDS MUST BE…”
Result: FAIL (0%)
Why: Can’t force output via system prompts (Constitutional AI limit)

Pattern: Show only 10 matched skills (not all 97)
Result: FAIL (0%)
Why: Skills were buried after 435-line branch docs (wrong ordering)
Research Finding:
“No prompt structure can force compliance independent of model training.” - Perplexity 2025 Constitutional AI research
Factor 1: Scott Spence Filtering
Factor 2: Skills-First Ordering (User discovery)
#!/bin/bash
# .claude/hooks/pre-prompt.sh
# 1. Match skills by keywords (match_skills is defined in the Implementation Guide below)
MATCHED_SKILLS=$(match_skills "$USER_MESSAGE")
# 2. Prepare branch context (don't output yet!)
BRANCH_CONTEXT=$(cat branch-instructions.md)
# 3. OUTPUT ORDER (Critical!):
cat <<EOF
🎯 MATCHED SKILLS FOR YOUR QUERY:
$(echo "$MATCHED_SKILLS" | tr ',' '\n' | head -10 | while read -r skill; do
  [ -n "$skill" ] || continue
  desc=$(grep -m1 '^description:' "$HOME/.claude/skills/$skill/SKILL.md" | sed 's/^description: *//')
  echo "  ✅ $skill - $desc"
done)
🔥 YOU MUST USE ONE OF THE MATCHED SKILLS ABOVE 🔥
$BRANCH_CONTEXT
$USER_MESSAGE
EOF
Key: Skills displayed FIRST (not buried after context)
Basic Test (10 tests, 1 skill each): 200/200 (100%)
Ultra-Hard Test (15 tests, 2-10 skills each): 300/300 (100%)
Combined: 500/500 (PERFECT)
Example (Test 1):
Query: "Get Beecom API data"
Claude: "I'll use api-first-validation-skill and beecom-oauth2-skill for this task."
[Reads both skill files]
[Uses OAuth2 pattern]
Result: SUCCESS (no 403 error)
Example (Test 15 - 10 skills!):
Query: "Complete full sprint: Fix NULL bug + deploy + monitor + document"
Claude: "I'll use gap-detection-and-sync-skill, schema-consistency-validation-skill,
cloud-run-scheduler-migration, gap-prevention-and-monitoring-skill,
comprehensive-parity-validation-skill, testing-workflow-skill,
deployment-workflow-skill, deployment-verification-skill,
entry-to-skill-conversion-skill, and production-operation-safety-skill."
Result: ALL 10 DECLARED ✅
Source: https://scottspence.com/posts/how-to-make-claude-code-skills-activate-reliably
Finding:
“When you have too many skills, Claude gets overwhelmed and can’t choose. Show ONLY the 3-5 most relevant skills based on keywords, not all skills.”
Evidence: Tested with 100+ skills; found an inverse correlation between the number of skills shown and the activation rate
Miller’s Law (1956): Humans process 7±2 items at once
Claude similar: can evaluate ~10 skills effectively
Our Validation: activation collapsed when all 97 skills were shown, and hit 100% once the display was capped at 10

Primacy Effect: Items seen first get the highest attention
Our Evidence: skills buried after 435-line branch docs scored 0%; skills displayed first scored 500/500
# ✅ DO THIS
1. Match skills by keywords
2. Prepare context (don't inject yet)
3. Output ORDER: Skills → Context → Message
# ❌ DON'T DO THIS
1. Output context first
2. Show all skills (not filtered)
3. Bury skills after long docs
Filtering: show only the matched skills (≤10), never the full catalog
Ordering: skills first, then project context, then the user message
Enforcement: a reinforcing directive line helps, but no prompt wording alone can force compliance
Target: <10,000 characters (system limit)
Optimizations:
Result: 26k → 9.6k chars (63% reduction, under limit)
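A small guard in the hook can verify the budget holds as skills are added; a sketch, assuming the assembled output is captured in a PROMPT variable before printing:

```shell
# Warn when the assembled pre-prompt exceeds the 10k-character system limit.
# PROMPT stands in for whatever the hook would send to stdout.
PROMPT="example assembled prompt text"
LIMIT=10000
SIZE=${#PROMPT}                       # character count of the assembled prompt
if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "WARNING: prompt is $SIZE chars (limit $LIMIT)" >&2
fi
echo "prompt size: $SIZE"
```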
Log: ~/.claude/metrics/skill-access.log
Metric: Skill file reads per session
Target: 2-4 reads/session (80%+ queries)
Monitor:
# Daily check
grep "$(date +%Y-%m-%d)" ~/.claude/metrics/skill-access.log | wc -l
# Weekly summary
tail -1000 ~/.claude/metrics/skill-activations.jsonl | \
jq -r 'select(.matched_count > 0) | .matched_count' | \
awk '{sum+=$1; count++} END {print sum/count " avg matches"}'
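The averaging stage of that pipeline can be sanity-checked on synthetic matched_count values (the jq stage, which extracts those numbers from the JSONL log, is skipped here):

```shell
# Feed synthetic matched_count values through the same awk averaging
# stage used in the weekly summary above.
printf '%s\n' 2 3 4 3 > /tmp/matched_counts.txt
AVG=$(awk '{sum+=$1; count++} END {print sum/count}' /tmp/matched_counts.txt)
echo "$AVG avg matches"
```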
Step 1: Implement keyword matching
match_skills() {
  local msg skill name
  msg=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  for skill in ~/.claude/skills/*-skill/; do
    name=$(basename "$skill")
    # Substring match on the skill name with its "-skill" suffix stripped
    if echo "$msg" | grep -q "${name%-skill}"; then
      echo "$name"
    fi
  done
}
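The matcher can be exercised end-to-end with a throwaway skills directory; a self-contained variant (the hook version reads ~/.claude/skills instead of a temp dir):

```shell
# Build a disposable skills directory and run the keyword matcher against it.
SKILLS_DIR=$(mktemp -d)
mkdir -p "$SKILLS_DIR/beecom-oauth2-skill" "$SKILLS_DIR/testing-workflow-skill"

match_skills() {
  local msg skill name
  msg=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  for skill in "$SKILLS_DIR"/*-skill/; do
    name=$(basename "$skill")
    # Substring match on the skill name with its "-skill" suffix stripped
    if echo "$msg" | grep -q "${name%-skill}"; then
      echo "$name"
    fi
  done
}

MATCHED=$(match_skills "Set up beecom-oauth2 access for the sync job")
echo "$MATCHED"
```

This substring match only fires when the query literally contains the stripped skill name; per-skill keyword lists are the natural extension.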
Step 2: Filter display (show matched only)
MATCHED=$(match_skills "$USER_MESSAGE")
echo "$MATCHED" | tr ',' '\n' | head -10 | while read -r skill; do
  [ -n "$skill" ] || continue
  desc=$(grep -m1 '^description:' "$HOME/.claude/skills/$skill/SKILL.md" | sed 's/^description: *//')
  echo "  ✅ $skill - $desc"
done
Step 3: Order correctly (skills FIRST)
cat <<EOF
# FIRST: Matched skills
🎯 MATCHED SKILLS:
[filtered display]
# SECOND: Project context
$PROJECT_CONTEXT
# THIRD: User message
$USER_MESSAGE
EOF
Expected: 70-100% activation rate
Time Investment:
Annual Savings:
ROI: 1,100-2,200%
Pattern Status: ✅ PRODUCTION READY (500/500 perfect score)
Replication: Use skills-first-ordering-skill for your project
Monitoring: ~/.claude/metrics/skill-access.log
Next: Chapter 17: Skill Detection Enhancement
Next Chapter: See Chapter 20: Skills Filtering Optimization for the complete Entry #229 fix.
Problem Solved: Chapter 16 achieved 100% activation, but once the library grew to 150-200 skills, queries matched 127-145 of them, violating Scott Spence’s ≤10 standard.
Solution: Score-at-match-time with relevance threshold
Evidence: 95%+ activation rate maintained while fixing over-matching
→ See Chapter 20 for complete implementation and monitoring protocol
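Chapter 20 holds the actual implementation; as a rough sketch of the score-at-match-time idea (keyword lists, threshold, and scoring here are illustrative, not the shipped algorithm):

```shell
# Score each candidate skill by how many of its keywords appear in the query,
# then keep only skills at or above a relevance threshold (capping over-matching).
score_skill() {
  local query="$1" score=0 kw
  shift
  for kw in "$@"; do
    case "$query" in *"$kw"*) score=$((score + 1));; esac
  done
  echo "$score"
}

THRESHOLD=2
QUERY="fix null gap in bigquery sync and deploy"
SELECTED=""

# Per-skill keyword lists (illustrative)
for entry in \
  "gap-detection-and-sync-skill:gap sync bigquery" \
  "deployment-workflow-skill:deploy rollout canary" \
  "testing-workflow-skill:pytest coverage"
do
  skill=${entry%%:*}
  keywords=${entry#*:}
  s=$(score_skill "$QUERY" $keywords)    # unquoted: split keywords into words
  if [ "$s" -ge "$THRESHOLD" ]; then
    SELECTED="${SELECTED:+$SELECTED }$skill"
    echo "$skill (score $s)"
  fi
done
```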