- 🎯 Accurate: [48.77% Accuracy Improvement] More accurate than full-context on the LOCOMO benchmark (78.70% vs. 52.9%)
- ⚡ Agile: [91.83% Faster Response] Significantly lower p95 retrieval latency than full-context (1.44s vs. 17.12s)
- 💰 Affordable: [96.53% Token Reduction] Significantly lower cost than full-context without sacrificing performance (0.9k vs. 26k tokens)
In AI application development, enabling large language models to persistently "remember" historical conversations, user preferences, and contextual information is a core challenge. PowerMem combines a hybrid storage architecture of vector retrieval, full-text search, and graph databases, and introduces the Ebbinghaus forgetting curve theory from cognitive science to build a powerful memory infrastructure for AI applications. The system also provides comprehensive multi-agent support capabilities, including agent memory isolation, cross-agent collaboration and sharing, fine-grained permission control, and privacy protection mechanisms, enabling multiple AI agents to achieve efficient collaboration while maintaining independent memory spaces.
- 🔌 Lightweight Integration: Provides a simple Python SDK that automatically loads configuration from `.env` files, enabling developers to quickly integrate it into existing projects. Also supports MCP Server and HTTP API Server integration methods
- 🔍 Intelligent Memory Extraction: Automatically extracts key facts from conversations through LLM, intelligently detects duplicates, updates conflicting information, and merges related memories to ensure accuracy and consistency of the memory database
- 📉 Ebbinghaus Forgetting Curve: Based on the memory forgetting patterns from cognitive science, automatically calculates memory retention rates and implements time-decay weighting, prioritizing recent and relevant memories, allowing AI systems to naturally "forget" outdated information like humans
- 🎭 User Profile: Automatically builds and updates user profiles based on historical conversations and behavioral data, applicable to scenarios such as personalized recommendations and AI companionship, enabling AI systems to better understand and serve each user
- 🔐 Agent Shared/Isolated Memory: Provides independent memory spaces for each agent, supports cross-agent memory sharing and collaboration, and enables flexible permission management through scope control
- 🖼️ Text, Image, and Audio Memory: Automatically converts images and audio to text descriptions for storage, supports retrieval of multimodal mixed content (text + image + audio), enabling AI systems to understand richer contextual information
- 📦 Sub Stores Support: Implements data partition management through sub stores, supports automatic query routing, significantly improving query performance and resource utilization for ultra-large-scale data
- 🔗 Hybrid Retrieval: Combines multi-channel recall capabilities of vector retrieval, full-text search, and graph retrieval, builds knowledge graphs through LLM and supports multi-hop graph traversal for precise retrieval of complex memory relationships
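The forgetting-curve weighting described above can be illustrated with the classic Ebbinghaus retention formula R = e^(-t/S). This is a minimal sketch of the idea, not PowerMem's actual scoring code; the stability parameter `S` and the multiplicative combination with similarity are illustrative assumptions:

```python
import math

def retention(age_hours: float, stability: float = 24.0) -> float:
    """Ebbinghaus retention R = e^(-t/S): memory strength decays with age.

    `stability` (S) controls how quickly memories fade; its value here
    is an arbitrary illustration, not PowerMem's configuration.
    """
    return math.exp(-age_hours / stability)

def time_weighted_score(similarity: float, age_hours: float) -> float:
    """Down-weight a retrieval similarity score by how far the memory has decayed."""
    return similarity * retention(age_hours)

# A recent memory outranks an equally similar but older one.
print(round(time_weighted_score(0.9, age_hours=1), 3))   # fresh memory
print(round(time_weighted_score(0.9, age_hours=72), 3))  # three days old
```

With this shape of weighting, two memories with identical semantic similarity are ranked by recency, which is how time-decay lets a system "forget" stale facts without deleting them outright.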
```bash
pip install powermem
```

✨ Simplest Way: Create memory from the `.env` file automatically! See the Configuration Reference.
```python
from powermem import Memory, auto_config

# Load configuration (auto-loads from .env)
config = auto_config()

# Create memory instance
memory = Memory(config=config)

# Add memory
memory.add("User likes coffee", user_id="user123")

# Search memories
results = memory.search("user preferences", user_id="user123")
for result in results.get('results', []):
    print(f"- {result.get('memory')}")
```

For more detailed examples and usage patterns, see the Getting Started Guide.
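Conceptually, loading configuration from a `.env` file boils down to parsing `KEY=VALUE` lines. The sketch below is a simplified stand-in to show the idea, not PowerMem's actual `auto_config()` implementation, and the variable names in the sample are hypothetical:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping comments and blank lines."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and blanks
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

# Hypothetical .env contents for illustration
sample = """
# LLM settings
LLM_PROVIDER=openai
LLM_API_KEY="sk-..."
"""
print(parse_env(sample))
```

In practice `auto_config()` handles this for you; consult the Configuration Reference for the actual variable names PowerMem expects.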
PowerMem also provides a production-ready HTTP API server that exposes all core memory management capabilities through RESTful APIs. This enables any application that supports HTTP calls to integrate PowerMem's intelligent memory system, regardless of programming language.
Relationship with SDK: The API server uses the same PowerMem SDK under the hood and shares the same configuration (.env file). It provides an HTTP interface to the same memory management features available in the Python SDK, making PowerMem accessible to non-Python applications.
Starting the API Server:
```bash
# Method 1: Using the CLI command (after pip install)
powermem-server --host 0.0.0.0 --port 8000

# Method 2: Using Docker
docker run -d \
  --name powermem-server \
  -p 8000:8000 \
  -v $(pwd)/.env:/app/.env:ro \
  --env-file .env \
  oceanbase/powermem-server:latest

# Or use Docker Compose (recommended)
docker-compose -f docker/docker-compose.yml up -d
```
Once started, the API server provides:
- RESTful API endpoints for all memory operations
- Interactive API documentation at `http://localhost:8000/docs`
- API Key authentication and rate limiting support
- The same configuration as the SDK (via the `.env` file)
For complete API documentation and usage examples, see the API Server Documentation.
PowerMem also provides a Model Context Protocol (MCP) server that enables integration with MCP-compatible clients such as Claude Desktop. The MCP server exposes PowerMem's memory management capabilities through the MCP protocol, allowing AI assistants to access and manage memories seamlessly.
Relationship with SDK: The MCP server uses the same PowerMem SDK and shares the same configuration (.env file). It provides an MCP interface to the same memory management features, making PowerMem accessible to MCP-compatible AI assistants.
Installation:
```bash
# Install PowerMem (required)
pip install powermem

# Install uvx (if not already installed)
# On macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows:
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Starting the MCP Server:
```bash
# SSE mode (recommended, default port 8000)
uvx powermem-mcp sse

# SSE mode with a custom port
uvx powermem-mcp sse 8001

# Stdio mode
uvx powermem-mcp stdio

# Streamable HTTP mode (default port 8000)
uvx powermem-mcp streamable-http

# Streamable HTTP mode with a custom port
uvx powermem-mcp streamable-http 8001
```

Integration with Claude Desktop:
Add the following configuration to your Claude Desktop config file:
```json
{
  "mcpServers": {
    "powermem": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

The MCP server provides tools for memory management, including adding, searching, updating, and deleting memories. For complete MCP documentation and usage examples, see the MCP Server Documentation.
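If you run the server in stdio mode instead, Claude Desktop can launch the process itself using its standard `command`/`args` form for stdio MCP servers. A config sketch, assuming `uvx` is on your PATH:

```json
{
  "mcpServers": {
    "powermem": {
      "command": "uvx",
      "args": ["powermem-mcp", "stdio"]
    }
  }
}
```

With this variant there is no port to manage, since Claude Desktop communicates with the server over stdin/stdout.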
- 🔗 LangChain Integration: Build a medical support chatbot using LangChain + PowerMem + OceanBase, View Example
- 🔗 LangGraph Integration: Build a customer service chatbot using LangGraph + PowerMem + OceanBase, View Example
- 📖 Getting Started: Installation and quick start guide
- ⚙️ Configuration Guide: Complete configuration options
- 🤖 Multi-Agent Guide: Multi-agent scenarios and examples
- 🔌 Integrations Guide: Integrations with third-party frameworks
- 📦 Sub Stores Guide: Sub stores usage and examples
- 📋 API Documentation: Complete API reference
- 🏗️ Architecture Guide: System architecture and design
- 📓 Examples: Interactive Jupyter notebooks and use cases
- 👨‍💻 Development Documentation: Developer documentation
| Version | Release Date | Function |
|---|---|---|
| 0.3.0 | 2026.01.09 | |
| 0.2.0 | 2025.12.16 | |
| 0.1.0 | 2025.11.14 | |
- 🐛 Issue Reporting: GitHub Issues
- 💭 Discussions: GitHub Discussions
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.