v2.3 is live on PyPI. This is the biggest architecture update since the BEAM system launched. Three features, one goal: let Mnemosyne remember more, grow slower, and tell you which memories came from you and which came from an agent's best guess.
pip install --upgrade mnemosyne-memory
Tiered Episodic Degradation
Memories now degrade automatically through three tiers. No manual pruning. No admin panel. Zero maintenance.
- Tier 1 (0 to 30 days): Full detail. Everything you said is preserved verbatim. 1.0x recall weight.
- Tier 2 (30 to 180 days): LLM-compressed summary. Still searchable, just leaner. 0.5x recall weight.
- Tier 3 (180+ days): Entity-extracted signal. Only the key facts survive. 0.25x recall weight. Still findable if the semantic match is strong enough.
The system runs degradation automatically during the sleep cycle. Old memories quietly move down a tier. You never have to think about it.
Config: set MNEMOSYNE_TIER2_DAYS and MNEMOSYNE_TIER3_DAYS to adjust the thresholds. Weight each tier with MNEMOSYNE_TIER1_WEIGHT through MNEMOSYNE_TIER3_WEIGHT.
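The tiering logic boils down to an age-to-tier mapping plus a weight lookup. Here is a minimal sketch in plain Python, not the library's actual implementation; the constants mirror the documented defaults that MNEMOSYNE_TIER2_DAYS, MNEMOSYNE_TIER3_DAYS, and the MNEMOSYNE_TIER*_WEIGHT variables override:

```python
# Documented defaults: 30 days to Tier 2, 180 days to Tier 3.
# The shipped package reads these from the MNEMOSYNE_* env vars.
TIER2_DAYS = 30
TIER3_DAYS = 180
TIER_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.25}  # MNEMOSYNE_TIER{1..3}_WEIGHT

def tier_for_age(age_days: int) -> int:
    """Map a memory's age to its degradation tier."""
    if age_days < TIER2_DAYS:
        return 1  # full verbatim detail
    if age_days < TIER3_DAYS:
        return 2  # LLM-compressed summary
    return 3      # entity-extracted signal

def recall_weight(age_days: int) -> float:
    """Factor applied to a memory's semantic-match score at recall time."""
    return TIER_WEIGHTS[tier_for_age(age_days)]
```

A 200-day-old memory still surfaces, it just needs a semantic match strong enough to survive the 0.25x discount.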
Smart Compression
Tier 2 to Tier 3 degradation used to grab the first 200 characters and discard the rest. If your memory had five sentences and the important one came last, it was gone.
The new _extract_key_signal() function scores every sentence by entity density. Proper nouns, acronyms, security terms, infrastructure terms, urgency words, and preference indicators all boost a sentence's score. The highest-scoring sentences survive. A password change buried in paragraph five stays. "Morning standup was uneventful" gets dropped.
Enabled by default. Disable with MNEMOSYNE_SMART_COMPRESS=0. Adjust output size with MNEMOSYNE_TIER3_MAX_CHARS=300.
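The idea behind _extract_key_signal() can be sketched as an entity-density scorer. This is illustrative only: the keyword list and per-feature scores below are assumptions, not the shipped ones, and the real function covers far more signal categories:

```python
import re

# Hypothetical signal terms -- the shipped lists are larger.
SIGNAL_TERMS = {"password", "token", "server", "outage", "urgent", "prefer"}

def score_sentence(sentence: str) -> int:
    score = 0
    for word in sentence.split():
        bare = word.strip(".,;:!?\"'()")
        if bare.isupper() and len(bare) > 1:
            score += 2  # acronyms (DB, API, TLS)
        elif bare[:1].isupper() and bare.lower() not in {"the", "a", "i"}:
            score += 1  # proper-noun-ish tokens
        if bare.lower() in SIGNAL_TERMS:
            score += 3  # security / urgency / preference terms
    return score

def extract_key_signal(text: str, max_chars: int = 300) -> str:
    """Keep the highest-scoring sentences within a character budget."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    ranked = sorted(sentences, key=score_sentence, reverse=True)
    kept, total = [], 0
    for s in ranked:
        if total + len(s) > max_chars and kept:
            break
        kept.append(s)
        total += len(s) + 1
    kept.sort(key=sentences.index)  # restore original order for readability
    return " ".join(kept)
```

Under a tight budget, the low-signal sentence is the one that disappears, which is the inversion of the old first-200-characters behavior.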
Memory Confidence
Every memory now carries a veracity field. Five levels:
- stated: you said it directly (1.0x weight)
- inferred: the agent deduced it (0.7x weight)
- tool: automated system injected it (0.5x weight)
- imported: came from another platform (0.6x weight)
- unknown: legacy or uncategorized (0.8x weight)
Set veracity when you write: beam.remember("fact", veracity="stated"). Filter when you read: beam.recall("query", veracity="stated").
There is also a new get_contaminated() method. It returns everything that was not explicitly stated by you: inferences, tool outputs, imports, and uncategorized data. Sorted by importance. This is your review queue. Promote what was correct. Let the rest fade.
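The review-queue semantics are simple enough to sketch directly. Assumed here: memories expose their veracity and an importance score; the dict shape is illustrative, not the library's record format:

```python
def get_contaminated(memories):
    """Everything not explicitly stated by the user,
    sorted most-important first -- a human review queue."""
    queue = [m for m in memories if m["veracity"] != "stated"]
    return sorted(queue, key=lambda m: m["importance"], reverse=True)
```

The high-importance inference lands at the top of the queue, where a wrong guess does the most damage if left unreviewed.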
All weights are configurable: MNEMOSYNE_STATED_WEIGHT, MNEMOSYNE_INFERRED_WEIGHT, etc.
What else
- 51 beam tests, all passing
- Fixed a crash in the LLM degradation path (wrong function name)
- Fixed SQLite connection conflicts in batch tests
- Removed a hallucinated roadmap phase from the public repo
- Opened an upstream issue for tier visualization in the community dashboard
Full changelog at github.com/AxDSan/mnemosyne.
Built with AI assistance. Designed and shipped in a single day. 51 tests, zero failures.