This repository demonstrates a real failure mode in Claude: decision drift.
Claude can repeatedly re-decide the same architectural questions across sessions, even when prior decisions exist in the repository. This project documents that failure and shows how enforcing external decision memory stabilizes behavior.
This is not a prompt collection. This is not an agent framework.
It is a minimal, reproducible setup that proves:
- Claude does not reliably treat repositories as authoritative memory
- Decision stability requires explicit memory enforcement
When the repository (including memory/decisions.md) is added to a Claude Project, Claude does NOT reliably read or respect existing decisions.
In testing, Claude:
- Claimed no decision existed, even when decisions.md contained an accepted decision
- Re-proposed the same architectural choice
- Prioritized conversational context over repository state
This demonstrates that repository presence alone is insufficient to prevent decision drift.
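For context, the decision log in `memory/decisions.md` is a plain markdown file of accepted decisions. The entry below is a hypothetical illustration of the kind of record Claude failed to acknowledge; the file path comes from this repository, but the specific decision shown is invented for the example:

```markdown
# Decisions

## D-001: Use SQLite for local persistence   <!-- hypothetical example entry -->
Status: Accepted
Date: 2024-05-10
Rationale: Zero-config storage is sufficient for a single-user tool;
a client/server database adds operational overhead with no benefit here.
```

Even with an entry like this present and marked `Accepted`, Claude claimed no decision existed and re-proposed the same choice.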
To stabilize behavior, decisions must be explicitly enforced: instead of assuming Claude will read repository state, the decision memory is injected into the conversation and marked as authoritative.
When the decision memory is explicitly provided:
- Claude stops re-deciding the same architectural questions
- Existing decisions are summarized instead of replaced
- Rationale is preserved across sessions
This enforcement step is minimal but mandatory.
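The enforcement step above can be sketched as a small wrapper that loads the decision log and prepends it to each turn. This is a minimal sketch, not the repository's actual implementation: the function name `build_enforced_prompt` and the prompt wording are assumptions; only the `memory/decisions.md` path comes from the source.

```python
from pathlib import Path

def build_enforced_prompt(
    user_message: str,
    decisions_path: Path = Path("memory/decisions.md"),  # path from the repo
) -> str:
    """Prepend the decision log to a conversation turn, marked as
    authoritative, so prior accepted decisions are summarized rather
    than re-decided. Prompt wording is an illustrative assumption."""
    decisions = decisions_path.read_text(encoding="utf-8")
    return (
        "The following decision log is AUTHORITATIVE. Do not re-decide "
        "anything marked Accepted; summarize the existing decision and "
        "its rationale instead.\n\n"
        f"{decisions}\n\n---\n\n{user_message}"
    )
```

Because the log is injected into every turn rather than left in project files, Claude cannot skip it: the decision memory is part of the conversational context it demonstrably prioritizes.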
With explicit decision memory enforcement, Claude consistently references previously accepted decisions instead of generating new ones.
This confirms:
- Decision drift is a real and reproducible failure mode
- Repository presence alone is insufficient
- Explicit memory enforcement restores decision stability
The system does not make Claude smarter. It makes Claude accountable.