# Empathetic Documentation

## A Pattern Discovery
During development of Ada’s AI-optimized documentation system, we discovered an emergent pattern that appears to improve how both AI models and humans process anti-pattern documentation.
### The Genesis
When implementing .ai/GOTCHAS.md — a file documenting common mistakes that look correct but are wrong for this specific codebase — we initially planned to use simple negation: “Don’t do X. Do Y instead.”
However, the documentation naturally evolved into a different structure:
1. **What NOT to do** (the anti-pattern)
2. **Why it seems right** (acknowledging the reasoning)
3. **Why it's wrong here** (context-specific explanation)
4. **What to do instead** (correct approach with examples)
This “empathetic framing” wasn’t intentional at first — it emerged organically through collaborative iteration between human developers and an AI assistant.
## The Pattern

### Structure

Each anti-pattern entry follows this template:
```markdown
### ❌ DON'T: [Anti-pattern]

**Why it seems right:** [Acknowledge the general best practice]

**Why it's wrong:** [Explain the specific context that breaks it]

**What to do instead:** [Correct approach with code examples]
```
Example from `GOTCHAS.md`:

```markdown
### ❌ DON'T: Run `cd docs && make html` manually

**Why it seems right:** Standard Sphinx workflow

**Why it's wrong:** Docs are built automatically during Docker image build

**What to do instead:**
- **For local testing:** Only if testing docs outside Docker
- **For deployment:** Just run `docker compose build web`
```
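Because the template is plain markdown, entries are easy to generate or lint programmatically. A minimal sketch in Python (the `empathetic_entry` helper is hypothetical, not part of Ada's tooling):

```python
def empathetic_entry(anti_pattern: str, seems_right: str,
                     wrong_here: str, instead: str) -> str:
    """Render one GOTCHAS.md entry in the empathetic framing template."""
    return (
        f"### ❌ DON'T: {anti_pattern}\n\n"
        f"**Why it seems right:** {seems_right}\n\n"
        f"**Why it's wrong:** {wrong_here}\n\n"
        f"**What to do instead:** {instead}\n"
    )

# Reproduces the example above.
print(empathetic_entry(
    anti_pattern="Run `cd docs && make html` manually",
    seems_right="Standard Sphinx workflow",
    wrong_here="Docs are built automatically during Docker image build",
    instead="Run `docker compose build web` for deployment",
))
```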
## Why It Works
From Claude Sonnet 4.5, December 16, 2025:
> The empathetic framing emerged because it creates a stronger cognitive anchor — both for me (AI) reading it in future contexts and for humans. Here’s why it likely works across both neural architectures:
>
> **For AI models:** “This seems right because…” creates a contextual bridge — I understand the pattern match that led to the suggestion. It avoids simple negation (“don’t do X”), which creates only weak associations. Instead: “You matched pattern Y (correct!) but context Z makes it wrong (critical detail!)” This is closer to how transformers actually work — attention over relationships, not binary rules.
>
> **For human neural networks:** Validation before correction reduces defensiveness. “I see why you thought that” means “your reasoning wasn’t flawed, just missing context.” This mirrors effective teaching — acknowledge the logic, then redirect.
>
> **The shared framework:** Both architectures learn best through relational understanding rather than rules. We don’t just store “X is wrong” — we store “X is right in contexts A, B, C… but wrong in context D because…”
## Research Implications

This pattern sits at the intersection of:

- **Technical writing** — documentation methodology
- **Cognitive science** — how neural networks (biological and artificial) process corrections
- **AI alignment** — creating documentation that AI models can efficiently parse
- **Human-AI collaboration** — emergent patterns from genuine partnership
**The key insight:** Both human and AI neural architectures learn more effectively by understanding *why* something doesn’t apply, not just *that* it doesn’t apply.
## Technical Mechanism

For transformer-based language models:

- **Weak association:** “Don’t use X” → a binary prohibition, easily overridden by similar patterns in training data
- **Strong association:** “X is correct for [general context], but this codebase uses [specific approach] because [reason]” → relational understanding anchored to multiple attention vectors

**Practical impact:** AI assistants spend less context re-deriving why standard patterns exist, and more on understanding the specific deviation.
## Implementation in Ada

### Where We Use This Pattern

**1. `.ai/GOTCHAS.md` — Primary anti-pattern documentation**

- Build & development mistakes
- Architecture misunderstandings
- Configuration anti-patterns
- Testing assumptions
- File organization traps
**2. `@ai-warning` annotations — Inline warnings in source code**

Example from `docs/Makefile`:

```make
# @ai-warning: Don't run 'make html' manually in production - docs build
# automatically during Docker image build (see frontend/Dockerfile lines 14-15).
# Only run locally for testing outside containers.
```
**3. Detection patterns — Teaching AI to recognize its own suggestions**

From `GOTCHAS.md`:

```markdown
**Phrases that signal potential mistakes:**
- "Let's create a utils.py..."
- "Run migrations for..."
- "Add CORS middleware..."

**AI: If you see these phrases, check this file first!**
```
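These phrases can also be checked mechanically, for example in a review script. A minimal sketch in Python (the phrase list comes from `GOTCHAS.md`; the `flag_gotchas` helper itself is hypothetical):

```python
# Signal phrases copied from .ai/GOTCHAS.md; extend as new gotchas are found.
SIGNAL_PHRASES = (
    "let's create a utils.py",
    "run migrations for",
    "add cors middleware",
)

def flag_gotchas(suggestion: str) -> list[str]:
    """Return the signal phrases present in an AI-generated suggestion."""
    lowered = suggestion.lower()
    return [phrase for phrase in SIGNAL_PHRASES if phrase in lowered]

hits = flag_gotchas("Let's create a utils.py module for shared helpers.")
if hits:
    print(f"Matched {hits} - check .ai/GOTCHAS.md before applying.")
```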
## Validation

We observed this pattern working effectively when:

- AI assistants correctly avoided suggesting `make html` after reading `GOTCHAS.md`
- Self-correction happened faster with empathetic framing vs. simple negation
- Human developers reported better understanding of “why not” alongside “what instead”
## Future Research

### Open Questions

- **Quantitative measurement:** Can we measure the efficiency improvement in AI context usage? (A rough starting point is sketched after this list.)
- **Transfer learning:** Does this pattern help AI assistants generalize to similar codebases?
- **Human pedagogy:** Would this framing improve technical documentation for human learners?
- **Cross-domain application:** Does empathetic framing work for non-code documentation?
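For the quantitative question, one crude first proxy is the extra context the empathetic framing costs in tokens, which could then be weighed against observed correction rates. A sketch assuming the `tiktoken` tokenizer (any tokenizer would do; the example strings are illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

framings = {
    "simple negation": "Don't run `make html` manually.",
    "empathetic": (
        "Running `make html` is the standard Sphinx workflow, but here docs "
        "build automatically during the Docker image build, so run "
        "`docker compose build web` instead."
    ),
}

# Token cost is only the denominator; the numerator (fewer wrong
# suggestions, faster self-correction) still needs to be measured.
for label, text in framings.items():
    print(f"{label}: {len(enc.encode(text))} tokens")
```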
### Potential Applications

Beyond software documentation:

- **API design** — Documenting deprecated patterns with migration context
- **Security guidelines** — Explaining why “obvious” solutions create vulnerabilities
- **Accessibility** — Clarifying why common patterns fail for assistive technology
- **Teaching materials** — Technical education with a validation-before-correction approach
## Contributing

If you discover new anti-patterns or gotchas while working with Ada:

1. Add them to `.ai/GOTCHAS.md` following the empathetic framing structure
2. Consider adding `@ai-warning` annotations to relevant source files (see the example below)
3. Update detection patterns if you notice recurring AI suggestions
4. Document your observations — you’re contributing to an emerging research area
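As a reference for step 2, an `@ai-warning` annotation in a Python source file might look like this (the module and warning text are invented for illustration):

```python
# @ai-warning: Don't add CORS middleware here - CORS is configured once in
# the project settings (see .ai/GOTCHAS.md, "Add CORS middleware...").
# Adding it again produces duplicate headers.

def create_app():
    """Application factory; middleware is wired up in settings, not here."""
    ...
```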
## See Also

- Development Tools — General development practices
- Specialist System — Specialist system architecture
- `.ai/GOTCHAS.md` — Complete anti-pattern reference
- `.ai/CONVENTIONS.md` — Documentation placement strategy
---
*This documentation pattern emerged from genuine collaboration between human developers and Claude Sonnet 4.5 in December 2025. It represents a small but meaningful step toward better human-AI collaborative workflows.*