Memory Backend Evaluation: Manticore vs Alternatives
Decision Summary
- Recommended now: Manticore for indexed text retrieval and future vector layering.
- Default fallback: Django/ORM backend for zero-infra environments.
- Revisit later: dedicated vector DB only if recall quality or ANN latency requires it.
Why Manticore Fits This Stage
- Already present in adjacent infra and codebase history.
- Runs well as a small standalone container with low operational complexity.
- Supports SQL-like querying and fast full-text retrieval for agent memory/wiki content.
- Lets us keep one retrieval abstraction while deferring embedding complexity.
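The "one retrieval abstraction" idea can be sketched as a small backend interface with the zero-infra fallback fully inline and Manticore behind its SQL protocol. This is a minimal sketch, not the project's actual code: the class names, the `agent_memory` table, and the injected DB-API connection are all illustrative assumptions.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class MemoryHit:
    doc_id: str
    score: float
    snippet: str


class MemorySearchBackend(ABC):
    """Single retrieval abstraction shared by all backends."""

    @abstractmethod
    def search(self, query: str, limit: int = 10) -> list[MemoryHit]:
        ...


class DjangoScanBackend(MemorySearchBackend):
    """Zero-infra fallback: naive case-insensitive scan over an
    in-memory corpus (stands in for an ORM icontains query)."""

    def __init__(self, corpus: dict[str, str]):
        self.corpus = corpus

    def search(self, query: str, limit: int = 10) -> list[MemoryHit]:
        q = query.lower()
        hits = [
            MemoryHit(doc_id, 1.0, text[:120])
            for doc_id, text in self.corpus.items()
            if q in text.lower()
        ]
        return hits[:limit]


class ManticoreBackend(MemorySearchBackend):
    """Indexed full-text retrieval via Manticore's SQL interface.
    `conn` is any DB-API connection speaking MySQL protocol to
    Manticore (e.g. pymysql on port 9306); the table name
    'agent_memory' is a placeholder."""

    def __init__(self, conn):
        self.conn = conn

    def search(self, query: str, limit: int = 10) -> list[MemoryHit]:
        cur = self.conn.cursor()
        cur.execute(
            "SELECT id, WEIGHT() FROM agent_memory "
            "WHERE MATCH(%s) LIMIT %s",
            (query, limit),
        )
        return [MemoryHit(str(i), float(w), "") for i, w in cur.fetchall()]
```

Callers depend only on `MemorySearchBackend.search`, so swapping the scan fallback for Manticore (or later layering vectors) never touches call sites.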
Tradeoff Notes
- A Manticore-first approach gives an immediate performance win over linear scans of markdown files.
- For advanced ANN/vector-only workloads, Qdrant/pgvector/Weaviate may outperform with less custom shaping.
- A hybrid approach remains possible:
- Manticore for lexical + metadata filtering,
- optional vector store for semantic recall.
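The hybrid shape only needs a merge step between the two ranked lists. Reciprocal rank fusion (RRF) is one common way to do this without calibrating scores across systems; the function below is an illustrative sketch, and the constant `k=60` is the conventional default, not a tuned value.

```python
def rrf_merge(lexical_ids: list[str], vector_ids: list[str], k: int = 60) -> list[str]:
    """Merge two ranked doc-id lists with reciprocal rank fusion:
    score(d) = sum over lists containing d of 1 / (k + rank(d)),
    with rank starting at 1. Higher score ranks first."""
    scores: dict[str, float] = {}
    for ranked in (lexical_ids, vector_ids):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# A doc returned by both Manticore (lexical) and the vector store
# accumulates score from both lists, so it rises to the top.
merged = rrf_merge(["a", "b", "c"], ["b", "d"])
```

Because RRF only consumes rank positions, the lexical side can keep Manticore's WEIGHT() ordering and the vector side its cosine ordering with no score normalization.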
Practical Rollout
- Start with MEMORY_SEARCH_BACKEND=django and verify API/command workflows.
- Start the Manticore container and switch to MEMORY_SEARCH_BACKEND=manticore.
- Run a reindex and validate query latency/quality on real agent workflows.
- Add embedding pipeline only after baseline lexical retrieval is stable.
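The rollout toggle above can be sketched as an environment-driven selector. MEMORY_SEARCH_BACKEND comes from this doc; the function itself and its validation are hypothetical, assuming settings are read once at startup.

```python
import os


def select_backend_name(default: str = "django") -> str:
    """Read MEMORY_SEARCH_BACKEND, falling back to the zero-infra
    default, and fail fast on typos rather than silently scanning."""
    name = os.environ.get("MEMORY_SEARCH_BACKEND", default).lower()
    if name not in {"django", "manticore"}:
        raise ValueError(f"unsupported MEMORY_SEARCH_BACKEND: {name!r}")
    return name
```

Failing fast on an unknown value keeps a misspelled backend name from quietly degrading to the slow scan path during the switch-over.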