Every AI agent faces the same existential problem: sessions end, context windows reset, and everything learned vanishes. Memory systems solve this — but the right choice depends on what you need to remember and how you need to retrieve it.
The simplest approach: a text file the agent reads at startup and writes to during sessions.
```markdown
# MEMORY.md
- Серега prefers Slack over Telegram
- Database ID: 2475fe37-2ab0-48a3-972d-a07a2753cb94
- Last deployed: 2026-03-28
- Known issue: Apify actors reset to private periodically
```
Pros: Dead simple. Version-controllable. Human-readable. No infrastructure.
Cons: Linear search only. Doesn't scale past ~500 entries. No relationship tracking.
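The whole approach fits in a few lines. A minimal sketch, assuming the one-bullet-per-memory convention from the example above (file name and helpers are illustrative):

```python
# Flat-file memory: load MEMORY.md at startup, append during sessions,
# recall via case-insensitive substring scan. This linear scan is
# exactly the "con" above: O(n), and only matches exact wording.
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")

def load_memories() -> list[str]:
    if not MEMORY_FILE.exists():
        return []
    # One "- " bullet per memory, as in the example above.
    return [line[2:].strip()
            for line in MEMORY_FILE.read_text().splitlines()
            if line.startswith("- ")]

def remember(entry: str) -> None:
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {entry}\n")

def recall(query: str) -> list[str]:
    q = query.lower()
    return [m for m in load_memories() if q in m.lower()]
```

Note that `recall("apify")` finds the known-issue entry, but `recall("scraper")` finds nothing, even though a human would consider it related. That gap is what the next approach fixes.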
Embed memory entries as vectors, search by semantic similarity.
```python
# Store a memory
embedding = embed("Apify actors reset to private periodically")
vector_db.upsert(id="mem_042", vector=embedding, metadata={...})

# Retrieve relevant memories
query = embed("why are my Apify scrapers not visible?")
results = vector_db.search(query, top_k=5)
# Returns: the Apify private reset memory + related entries
```
Pros: Semantic search (finds related memories even with different wording). Scales to millions of entries.
Cons: No structure. Can't traverse relationships. Embedding quality varies.
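To make the pseudocode above concrete, here is a self-contained sketch. A real system would call an embedding model (OpenAI, sentence-transformers, etc.) and a real vector database; the toy hashed bag-of-words `embed` below is an assumption standing in so the retrieval math runs end to end:

```python
# Vector memory sketch: toy embeddings + brute-force cosine similarity.
# Only the retrieval mechanics are realistic; swap embed() for a model.
import numpy as np

VOCAB_DIM = 512

def embed(text: str) -> np.ndarray:
    # Toy embedding: hash each word into a fixed-size bag-of-words
    # vector, then L2-normalize so dot product = cosine similarity.
    v = np.zeros(VOCAB_DIM)
    for word in text.lower().split():
        v[hash(word) % VOCAB_DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class VectorStore:
    def __init__(self):
        self.ids, self.vectors, self.texts = [], [], []

    def upsert(self, id: str, text: str) -> None:
        self.ids.append(id)
        self.vectors.append(embed(text))
        self.texts.append(text)

    def search(self, query: str, top_k: int = 5) -> list[tuple[str, float]]:
        q = embed(query)
        scores = np.stack(self.vectors) @ q  # cosine sim (unit vectors)
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.texts[i], float(scores[i])) for i in order]
```

With real embeddings, "why are my scrapers not visible?" would match the Apify memory despite sharing no keywords; the toy version still needs some word overlap, which is exactly where embedding quality matters.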
Store entities and relationships in a graph database.
```cypher
// Entities: People, Projects, Tools, Decisions
// Relationships: USES, DECIDED, BLOCKED_BY, RELATED_TO
MATCH (p:Person {name: "Серега"})-[:DECIDED]->(d:Decision)
WHERE d.topic CONTAINS "Apify"
RETURN d.description, d.reason, d.date
```
Pros: Relationship traversal. "What's connected to X?" queries. Decision audit trail.
Cons: Infrastructure overhead (Neo4j). Ingestion pipeline needed. More complex to maintain.
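You can prototype the graph idea before committing to Neo4j. A dependency-free sketch of the same traversal as the Cypher query above, using plain dicts as an adjacency list (all node data is illustrative):

```python
# Graph memory without a database: typed nodes, typed edges.
# Neo4j adds Cypher, indexes, and persistence; this shows only the
# traversal pattern ("follow DECIDED edges, filter by topic").
from collections import defaultdict

nodes: dict[str, dict] = {}                                  # id -> props
edges: dict[str, list[tuple[str, str]]] = defaultdict(list)  # src -> [(rel, dst)]

def add_node(node_id: str, **props) -> None:
    nodes[node_id] = props

def add_edge(src: str, rel: str, dst: str) -> None:
    edges[src].append((rel, dst))

add_node("Серега", kind="Person")
add_node("d1", kind="Decision", topic="Apify actors",
         description="Re-publish actors after each deploy",
         reason="actors reset to private periodically",
         date="2026-03-28")
add_edge("Серега", "DECIDED", "d1")

def decisions_about(person: str, topic: str) -> list[dict]:
    # Equivalent of the Cypher MATCH ... WHERE d.topic CONTAINS ...
    return [nodes[dst] for rel, dst in edges[person]
            if rel == "DECIDED"
            and nodes[dst]["kind"] == "Decision"
            and topic in nodes[dst]["topic"]]
```

Once queries start chaining hops ("what decisions were blocked by tools Серега uses?"), hand-rolled traversal gets painful fast; that is the signal to pay the Neo4j overhead.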
Combine approaches: flat files for quick access, vector search for retrieval, graph for relationships.
Retrieval flow:
1. Check MEMORY.md for exact matches (fast, deterministic)
2. Vector search for semantic matches (finds related context)
3. Graph query for relationships (traverses connections)
4. Merge results, rank by relevance
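The four steps above can be sketched as a skeleton. The three backends are stubs to swap for your flat file, vector store, and graph; the part worth copying is the merge, which dedupes by text and keeps each memory's best score:

```python
# Hybrid retrieval: run all three backends, merge, rank by score.

def exact_match(query: str) -> list[tuple[str, float]]:
    # Step 1: deterministic substring hits get the top score.
    memories = ["Known issue: Apify actors reset to private periodically"]
    return [(m, 1.0) for m in memories if query.lower() in m.lower()]

def vector_search(query: str) -> list[tuple[str, float]]:
    # Step 2: semantic neighbours with similarity scores (stubbed).
    return [("Apify actors reset to private periodically", 0.82),
            ("Last deployed: 2026-03-28", 0.31)]

def graph_neighbours(query: str) -> list[tuple[str, float]]:
    # Step 3: entities connected to matched nodes (stubbed, flat score).
    return [("Серега decided to re-publish actors after deploys", 0.5)]

def retrieve(query: str, top_k: int = 5) -> list[str]:
    # Step 4: merge, keeping the highest score per distinct memory.
    best: dict[str, float] = {}
    for backend in (exact_match, vector_search, graph_neighbours):
        for text, score in backend(query):
            best[text] = max(best.get(text, 0.0), score)
    ranked = sorted(best.items(), key=lambda kv: -kv[1])
    return [text for text, _ in ranked[:top_k]]
```

Giving exact matches a fixed top score is a deliberate choice: a deterministic hit should never be outranked by a fuzzy semantic one.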
Memory systems that only add and never remove will eventually drown in noise. Prune and consolidate periodically.
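One common pruning policy (an assumption on my part, not prescribed here) is recency decay weighted by how often a memory is actually recalled:

```python
# Decay-based pruning: score = exponential recency decay x usage boost.
# Entries below the threshold get archived or dropped. Half-life and
# threshold values are illustrative; tune them for your agent.
import math
import time

HALF_LIFE_DAYS = 30.0

def memory_score(last_access_ts: float, access_count: int,
                 now: float) -> float:
    age_days = (now - last_access_ts) / 86_400
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)  # halves every 30 days
    return recency * (1 + math.log1p(access_count))

def prune(memories: list[dict], threshold: float = 0.1) -> list[dict]:
    now = time.time()
    return [m for m in memories
            if memory_score(m["last_access"], m["hits"], now) >= threshold]
```

A memory untouched for a year scores near zero and gets dropped; one recalled yesterday survives regardless of age, which is usually the behaviour you want.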
Start with flat files (MEMORY.md + daily notes). Add vector search when you have 100+ memories. Add a knowledge graph when you need relationship traversal. Don't over-engineer from day one — you'll know when you need the next level because retrieval quality will degrade.