Add persistent memory to LLM apps with millisecond recall times. Store, retrieve, and personalize user data across sessions with enterprise-grade security.

Overview:

Supermemory is an open-source memory and context engine designed for AI systems. It addresses the problem of AI tools forgetting information between conversations by automatically extracting facts, building user profiles, handling knowledge updates and contradictions, and forgetting expired information. It is positioned for two main user groups: individual users of AI tools who want persistent memory across conversations, and developers building AI agents or applications who need a memory, RAG, and context stack through a single API. It ranks #1 on the LongMemEval, LoCoMo, and ConvoMem benchmarks for AI memory.

Core Features:

  • Memory Extraction: Automatically extracts facts from conversations, handling temporal changes, contradictions, and automatic forgetting of expired information.

  • User Profiles: Auto-maintained user context combining stable facts and recent activity, accessible via a single API call in approximately 50ms.

  • Hybrid Search: Combines RAG (knowledge base document retrieval) with personalized memory context in a single query.

  • Connectors: Syncs data from external services including Google Drive, Gmail, Notion, OneDrive, and GitHub, with real-time webhook support.

  • Multi-modal Extractors: Processes PDFs with OCR, videos with transcription, and code with AST-aware chunking.

  • MCP Server: Provides a standard interface for AI clients (Claude Desktop, Cursor, Windsurf, VS Code, Claude Code, OpenCode, OpenClaw, Hermes) to call memory, recall, and context tools.

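The memory, hybrid-search, and project-scoping features above are all exposed through a single HTTP API. The sketch below only constructs request payloads (it sends nothing over the network), and the host, endpoint paths, field names (e.g. `containerTags`), and auth scheme are assumptions for illustration rather than Supermemory's confirmed API surface; consult the official docs for the real shapes.

```python
# Hedged sketch: endpoint paths, field names ("containerTags", "include"),
# and the bearer-token auth below are illustrative assumptions, not
# Supermemory's documented API. Payloads are built but never sent.

BASE_URL = "https://api.supermemory.example"  # placeholder host


def build_add_memory_request(content: str, project: str, api_key: str) -> dict:
    """Describe a request that stores one memory, scoped to a
    project via a container tag (see 'Use Cases' below)."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/memories",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"content": content, "containerTags": [project]},
    }


def build_hybrid_search_request(query: str, project: str, api_key: str) -> dict:
    """Describe a single query that would combine knowledge-base
    document retrieval (RAG) with personalized memory context."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/search",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "q": query,
            "containerTags": [project],
            # hybrid search: ask for both document hits and memory context
            "include": ["documents", "memories"],
        },
    }


req = build_add_memory_request(
    "Prefers TypeScript for new services", "client-acme", "sk-demo"
)
print(req["url"])
```

The point of the sketch is the shape of the calls: one write path for memories and one read path that fans out across both the RAG index and the user's memory profile, with a container tag carried on both to keep projects separated.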
Use Cases:

  • AI tool users wanting persistent memory: Installing Supermemory as a browser extension or MCP server so their AI assistant (e.g., Claude, Cursor) remembers preferences, projects, and past discussions across sessions.

  • Developers building AI agents or apps: Integrating memory, user profiles, RAG, and file processing into AI applications using a single API, without configuring vector databases, embedding pipelines, or chunking strategies.

  • Organizing context by project or client: Using scoped "projects" (container tags) within Supermemory to separate work and personal context, or to organize by client or repository.

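For the MCP-server use case, clients such as Claude Desktop register servers in a JSON config file. The snippet below follows the standard `mcpServers` config format those clients share; the package name `supermemory-mcp` and the `SUPERMEMORY_API_KEY` variable are assumptions for illustration, so check Supermemory's own install instructions for the actual command.

```json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["-y", "supermemory-mcp"],
      "env": { "SUPERMEMORY_API_KEY": "<your-key>" }
    }
  }
}
```

Once registered, the client can call the server's memory, recall, and context tools in any conversation, which is what gives the assistant persistence across sessions.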
Why It Matters:

As an open-source project, Supermemory provides a self-contained memory layer for AI that is benchmarked as state-of-the-art. Its value lies in offering a complete context stack (memory extraction, user profiles, hybrid search, connectors, and file processing) through a single API or MCP server, which reduces the need for developers to build and maintain separate vector database, embedding, and chunking infrastructure. The inclusion of automatic forgetting and contradiction resolution also addresses specific technical challenges in persistent AI memory.

Project Statistics:

  • Stars: 22,347

  • Forks: 2,047

  • License: MIT

Metadata:

  • Alternative to: LangChain