Memori is an open-source, SQL-native memory engine for LLMs, AI agents, and multi-agent systems, providing persistent, queryable memory on standard SQL databases and integrating with a single line of code.
Memori revolutionizes how large language models (LLMs) handle memory by offering a seamless, open-source solution that integrates persistent storage directly into SQL databases. Unlike traditional approaches that rely on costly vector databases, Memori leverages standard SQL (SQLite, PostgreSQL, MySQL) for storing conversations, extracting entities, and maintaining context across sessions. This enables AI agents to 'remember' interactions, learn from user preferences, and provide more coherent responses without vendor lock-in.
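To illustrate the SQL-native design, the sketch below points Memori at each of the backends mentioned above. The constructor parameter name (database_connect) and the SQLAlchemy-style connection strings are assumptions for illustration, not confirmed API.

```python
from memori import Memori

# Sketch only: the database_connect parameter name and the SQLAlchemy-style
# connection strings are assumptions about the API, not confirmed usage.
mem_sqlite = Memori(database_connect="sqlite:///memori.db")
mem_postgres = Memori(database_connect="postgresql://user:pass@localhost:5432/memori")
mem_mysql = Memori(database_connect="mysql://user:pass@localhost:3306/memori")
```

Because the data lives in an ordinary SQL database, it can be inspected, backed up, and migrated with standard tooling rather than a proprietary vector store.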
A single call to memori.enable() hooks into any LLM framework, including OpenAI, Anthropic, LangChain, and LiteLLM; no complex setup is required. Memori intercepts LLM calls transparently.
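A rough sketch of what that looks like with the OpenAI Python SDK (the model name and the exact recording behavior in the comments are illustrative assumptions, not verified output):

```python
from memori import Memori
from openai import OpenAI

memori = Memori(conscious_ingest=True)
memori.enable()  # after this call, supported LLM clients are intercepted

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "I prefer short, direct answers."}],
)
print(response.choices[0].message.content)

# Memori records the exchange in SQL and can inject relevant context
# into subsequent calls, even across sessions.
```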
This interception architecture supports three memory modes: Conscious (essential memories kept in immediate context), Auto (dynamic search of stored memory), and a Combined mode that uses both.
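The mode is selected at construction time. In the sketch below, conscious_ingest comes from the quick start; the auto_ingest flag for Auto mode is an assumption about the API.

```python
from memori import Memori

# Conscious mode: promote essential memories into immediate context.
conscious = Memori(conscious_ingest=True)

# Auto mode: search stored memory dynamically on each call
# (the auto_ingest flag name is assumed, not confirmed API).
auto = Memori(auto_ingest=True)

# Combined mode: use both strategies together.
combined = Memori(conscious_ingest=True, auto_ingest=True)
combined.enable()
```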
Getting started is short: install with pip install memorisdk, initialize a Memori(conscious_ingest=True) instance, and call enable(). Configuration is flexible, using environment variables or ConfigManager for production namespaces.
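A hedged end-to-end quick start under those assumptions (the ConfigManager import path and its auto_load() method are guesses based on the description above, not confirmed API):

```python
# pip install memorisdk

from memori import Memori, ConfigManager  # ConfigManager import path is assumed

# Pull settings (database URL, namespace, API keys) from environment
# variables or a config file; auto_load() is an assumed loader name.
config = ConfigManager()
config.auto_load()

memori = Memori(conscious_ingest=True)
memori.enable()
```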
For more, check the documentation, join the Discord, or explore community contributions under the Apache 2.0 license. With over 6,000 GitHub stars, Memori is rapidly becoming a go-to choice for building stateful AI applications.