Merged

47 commits
11b748f
update requirements (#772)
pursues Dec 24, 2025
b27bead
update scheduler and add operation for dehallucination (#769)
tangg555 Dec 24, 2025
a1746fb
fix: update README.md (#774)
zZhangSir Dec 24, 2025
374c80c
Merge branch 'main' into dev
CaralHsi Dec 24, 2025
fc70e9f
Dev zhq new (#776)
zZhangSir Dec 25, 2025
3873adb
feat: add export_graph data page (#778)
wustzdy Dec 25, 2025
1ee536a
fix: optimize Neo4j Community Edition support and enhance MCP environ…
fancyboi999 Dec 25, 2025
5cf0282
fix: add feedback change to preference (#771)
whipser030 Dec 25, 2025
b11c768
fix: improve chat playground stability and chat handler initializatio…
Wang-Daoji Dec 25, 2025
de0376c
Feat: add OpenAI log (#785)
CarltonXiang Dec 25, 2025
3e4b342
Feat/dedup playground display (#789)
Wang-Daoji Dec 25, 2025
10342ef
add get_user_names_by_memory_ids api (#790)
Wang-Daoji Dec 25, 2025
fac1aa7
feat: add batch delete (#787)
wustzdy Dec 26, 2025
336a2be
feat: add dedup search param (#788)
glin93 Dec 26, 2025
748ef3d
Dev zdy 1226 page (#796)
wustzdy Dec 26, 2025
21df1c7
Patch: get_memory adds the page size parameter function and the filte…
whipser030 Dec 29, 2025
99dcf1d
Dev zdy 1229 (#802)
wustzdy Dec 29, 2025
17afbe7
feat: add get_user_names_by_memory_ids for polardb && neo4j (#803)
wustzdy Dec 29, 2025
9c25b46
feat: add export_graph total (#804)
wustzdy Dec 30, 2025
2ee0754
add: get_memory return edges and count of items (#805)
whipser030 Dec 30, 2025
63987d5
Scheduler: address some issues to run old scheduler example and kv ca…
tangg555 Dec 30, 2025
01172f3
add: milvus return data pagination (#806)
whipser030 Dec 30, 2025
acb5799
feat: update source return and chunk settings (#808)
fridayL Dec 30, 2025
9dba332
Scheduler: update exampels (#807)
tangg555 Dec 30, 2025
c88f17a
feat: add delete_node_by_prams filter (#810)
wustzdy Dec 30, 2025
791c2ee
fix: merge all preference (#811)
whipser030 Dec 30, 2025
c152f44
feat: update _build_filter_conditions_sql in conditions && build_cyph…
wustzdy Dec 31, 2025
c0b7228
feat: _build_filter_conditions_sql filter (#813)
wustzdy Dec 31, 2025
7993c3a
fix: update deprecated APIs for chonkie v1.4.0 and qdrant-client v1.1…
zhixiangxue Dec 31, 2025
03b79a2
feat: update code format (#814)
fridayL Dec 31, 2025
8819cc5
Feat/optimize cloud service api (#816)
Wang-Daoji Jan 4, 2026
5349674
fix: [PrefEval Evaluation] propagate --lib and --version arguments in…
fancyboi999 Jan 4, 2026
b3c9e84
fix: fix context error and empty embedding (#817)
fridayL Jan 4, 2026
38f9e2f
Feat/optimize cloud service api (#818)
Wang-Daoji Jan 4, 2026
7142baa
Feat/optimize cloud service api (#820)
Wang-Daoji Jan 4, 2026
2a91bd6
add exist_user_name for neo4j.py (#821)
wustzdy Jan 5, 2026
c5b6f15
Feat/optimize cloud service api (#822)
Wang-Daoji Jan 5, 2026
0abb555
Feat/optimize cloud service api (#825)
Wang-Daoji Jan 6, 2026
85860ce
feat: add filter time query (#826)
wustzdy Jan 6, 2026
d632dde
add getMemory sdk (#827)
wustzdy Jan 6, 2026
bbca35f
feat: Merge from main (some hot-fix) (#832)
CaralHsi Jan 7, 2026
1f7b227
Merge branch 'main' into dev
CaralHsi Jan 7, 2026
9c363b4
feat: add OpenAI token log (#831)
CarltonXiang Jan 7, 2026
8d63060
fix: feedback llm output fail to load json (#833)
whipser030 Jan 7, 2026
0e41b64
fix: Use env exchange overrides for all scheduler messages (#834)
glin93 Jan 7, 2026
3ee82f3
feat: support single-assistant mem-reader (#835)
CaralHsi Jan 7, 2026
c8238d5
feat: tmp optimize (#846)
Wang-Daoji Jan 9, 2026
22 changes: 5 additions & 17 deletions README.md
@@ -3,8 +3,6 @@
MemOS is an open-source **Agent Memory framework** that empowers AI agents with **long-term memory, personality consistency, and contextual recall**. It enables agents to **remember past interactions**, **learn over time**, and **build evolving identities** across sessions.

Designed for **AI companions, role-playing NPCs, and multi-agent systems**, MemOS provides a unified API for **memory representation, retrieval, and update** — making it the foundation for next-generation **memory-augmented AI agents**.

🆕 **MemOS 2.0** introduces **knowledge base system**, **multi-modal memory** (images & documents), **tool memory** for Agent optimization, **memory feedback mechanism** for precise control, and **enterprise-grade architecture** with Redis Streams scheduler and advanced DB optimizations.
<div align="center">
<a href="https://memos.openmem.net/">
<img src="https://statics.memtensor.com.cn/memos/memos-banner.gif" alt="MemOS Banner">
@@ -117,17 +115,6 @@ showcasing its capabilities in **information extraction**, **temporal and cross-
- **Textual Memory**: For storing and retrieving unstructured or structured text knowledge.
- **Activation Memory**: Caches key-value pairs (`KVCacheMemory`) to accelerate LLM inference and context reuse.
- **Parametric Memory**: Stores model adaptation parameters (e.g., LoRA weights).
- **Tool Memory** 🆕: Records Agent tool call trajectories and experiences to improve planning capabilities.
- **📚 Knowledge Base System** 🆕: Build multi-dimensional knowledge bases with automatic document/URL parsing, splitting, and cross-project sharing capabilities.
- **🔧 Memory Controllability** 🆕:
- **Feedback Mechanism**: Use the `add_feedback` API to correct, supplement, or replace existing memories with natural language (see the sketch after this list).
- **Precise Deletion**: Delete specific memories by User ID or Memory ID via API or MCP tools.
- **👁️ Multi-Modal Support** 🆕: Support for image understanding and memory, including chart parsing in documents.
- **⚡ Advanced Architecture**:
- **DB Optimization**: Enhanced connection management and batch insertion for high-concurrency scenarios.
- **Advanced Retrieval**: Custom tag and info field filtering with complex logical operations.
- **Redis Streams Scheduler**: Multi-level queue architecture with intelligent orchestration for fair multi-tenant scheduling.
- **Stream & Non-Stream Chat**: Ready-to-use streaming and non-streaming chat interfaces.
- **🔌 Extensible**: Easily extend and customize memory modules, data sources, and LLM integrations.
- **🏂 Lightweight Deployment** 🆕: Support for quick mode and complete mode deployment options.
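A minimal sketch of how the feedback mechanism above might be invoked over HTTP, following the `/product/*` request pattern shown later in this README. The `/product/feedback` route and the payload fields are illustrative assumptions, not the documented API:

```python
import requests
import json

# Hypothetical feedback request: correct an existing memory in natural language.
# The route name and payload fields are assumed for illustration only.
data = {
    "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
    "feedback": "My favorite drink is green tea, not coffee.",
}
headers = {
    "Content-Type": "application/json"
}
url = "http://localhost:8000/product/feedback"  # assumed route

res = requests.post(url=url, headers=headers, data=json.dumps(data))
print(f"result: {res.json()}")
```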

@@ -181,6 +168,7 @@ res = client.search_memory(query=query, user_id=user_id, conversation_id=convers
print(f"result: {res}")
```


### Self-Hosted Server
1. Get the repository.
```bash
Expand Down Expand Up @@ -215,7 +203,7 @@ Example
```python
import requests
import json

data = {
"user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
"mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca",
# ... (additional request fields collapsed in the diff view)
}
headers = {
"Content-Type": "application/json"
}
url = "http://localhost:8000/product/add"

res = requests.post(url=url, headers=headers, data=json.dumps(data))
print(f"result: {res.json()}")
```
- Search User Memory
```python
import requests
import json

data = {
"query": "What do I like",
"user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
# ... (additional request fields collapsed in the diff view)
}
headers = {
"Content-Type": "application/json"
}
url = "http://localhost:8000/product/search"

res = requests.post(url=url, headers=headers, data=json.dumps(data))
print(f"result: {res.json()}")
```
3 changes: 2 additions & 1 deletion docker/.env.example
@@ -76,6 +76,7 @@ MODEL=gpt-4o-mini
# embedding model for evaluation
EMBEDDING_MODEL=nomic-embed-text:latest


## Internet search & preference memory
# Enable web search
ENABLE_INTERNET=false
@@ -211,4 +212,4 @@ MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST=memos
# Erase connection state on connect for message-log pipeline
MEMSCHEDULER_RABBITMQ_ERASE_ON_CONNECT=true
# RabbitMQ port for message-log pipeline
MEMSCHEDULER_RABBITMQ_PORT=5672
MEMSCHEDULER_RABBITMQ_PORT=5672
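For orientation, the sketch below shows how a scheduler worker might consume RabbitMQ settings like the ones above. The use of `pika` and the host/user/password variable names are assumptions for illustration; only `MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST`, `MEMSCHEDULER_RABBITMQ_ERASE_ON_CONNECT`, and `MEMSCHEDULER_RABBITMQ_PORT` appear in this file excerpt:

```python
import os
import pika  # assumed AMQP client; MemOS's scheduler may use a different one

# Host/user/password variable names below are assumed; PORT and VIRTUAL_HOST
# come from the .env excerpt above.
params = pika.ConnectionParameters(
    host=os.getenv("MEMSCHEDULER_RABBITMQ_HOST_NAME", "localhost"),
    port=int(os.getenv("MEMSCHEDULER_RABBITMQ_PORT", "5672")),
    virtual_host=os.getenv("MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST", "memos"),
    credentials=pika.PlainCredentials(
        os.getenv("MEMSCHEDULER_RABBITMQ_USER_NAME", "guest"),
        os.getenv("MEMSCHEDULER_RABBITMQ_PASSWORD", "guest"),
    ),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
```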
2 changes: 1 addition & 1 deletion docker/requirements-full.txt
@@ -183,4 +183,4 @@ psycopg2-binary==2.9.11
py-key-value-aio==0.2.8
py-key-value-shared==0.2.8
PyJWT==2.10.1
pytest==9.0.2
pytest==9.0.2
5 changes: 1 addition & 4 deletions docker/requirements.txt
@@ -1,6 +1,3 @@
# Docker optimized requirements - Core dependencies only
# Excludes Windows-specific and heavy GPU packages for faster builds

annotated-types==0.7.0
anyio==4.11.0
attrs==25.4.0
@@ -125,4 +122,4 @@ urllib3==2.5.0
uvicorn==0.38.0
uvloop==0.22.1; sys_platform != 'win32'
watchfiles==1.1.1
websockets==15.0.1
websockets==15.0.1
2 changes: 1 addition & 1 deletion docs/README.md
@@ -1,3 +1,3 @@
All documentation has been moved to a separate repository: https://github.com/MemTensor/MemOS-Docs. Please edit documentation there.

所有文档已迁移至独立仓库https://github.com/MemTensor/MemOS-Docs。请在该仓库中编辑文档。
所有文档已迁移至独立仓库 https://github.com/MemTensor/MemOS-Docs 。请在该仓库中编辑文档。
8 changes: 6 additions & 2 deletions evaluation/scripts/run_prefeval_eval.sh
@@ -108,7 +108,9 @@ python $LIB_SCRIPT search \
--input $IDS_FILE \
--output $SEARCH_FILE \
--top-k $TOP_K \
--max-workers $WORKERS
--max-workers $WORKERS \
--lib $LIB \
--version $VERSION

if [ $? -ne 0 ]; then
echo "Error: $LIB_SCRIPT 'search' mode failed."
@@ -121,7 +123,9 @@ echo "Running $LIB_SCRIPT in 'response' mode..."
python $LIB_SCRIPT response \
--input $SEARCH_FILE \
--output $RESPONSE_FILE \
--max-workers $WORKERS
--max-workers $WORKERS \
--lib $LIB \
--version $VERSION

if [ $? -ne 0 ]; then
echo "Error: $LIB_SCRIPT 'response' mode failed."
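The change above forwards `--lib` and `--version` into both the `search` and `response` invocations. A rough sketch of why that matters: a script of this shape typically re-parses its flags on every invocation, so required flags must be passed each time or the run aborts. This is a hedged illustration of the implied CLI, not the actual evaluation script:

```python
# Sketch of the flag handling implied by run_prefeval_eval.sh; the real
# evaluation script's CLI may be structured differently.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("mode", choices=["search", "response"])
parser.add_argument("--input")
parser.add_argument("--output")
parser.add_argument("--top-k", type=int, default=20)
parser.add_argument("--max-workers", type=int, default=4)
parser.add_argument("--lib", required=True)      # which memory library to evaluate
parser.add_argument("--version", required=True)  # which library version/config to use
args = parser.parse_args()

# If the shell script did not forward --lib/--version, argparse would exit here
# with "the following arguments are required: --lib, --version".
print(args.mode, args.lib, args.version)
```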
21 changes: 21 additions & 0 deletions examples/data/config/mem_scheduler/mem_cube_config.yaml
@@ -0,0 +1,21 @@
user_id: "user_test"
cube_id: "user_test/mem_cube_naive"
text_mem:
backend: "naive_text"
config:
extractor_llm:
backend: "huggingface_singleton"
config:
model_name_or_path: "Qwen/Qwen3-0.6B"
temperature: 0.1
max_tokens: 1024
act_mem:
backend: "kv_cache"
config:
memory_filename: "activation_memory.pickle"
extractor_llm:
backend: "huggingface_singleton"
config:
model_name_or_path: "Qwen/Qwen3-0.6B"
temperature: 0.8
max_tokens: 1024
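As a quick sanity check, the new cube config can be read with plain PyYAML to confirm which backends it selects. This is a minimal sketch; MemOS's own config loader is not shown here:

```python
import yaml  # PyYAML

# Load the example cube config added above and inspect the selected backends.
with open("examples/data/config/mem_scheduler/mem_cube_config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["user_id"])              # user_test
print(cfg["text_mem"]["backend"])  # naive_text
print(cfg["act_mem"]["backend"])   # kv_cache
```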
12 changes: 4 additions & 8 deletions examples/data/config/mem_scheduler/memos_config_w_scheduler.yaml
@@ -10,16 +10,12 @@ mem_reader:
backend: "simple_struct"
config:
llm:
backend: "openai"
backend: "huggingface_singleton"
config:
model_name_or_path: "gpt-4o-mini"
temperature: 0.8
max_tokens: 4096
top_p: 0.9
top_k: 50
model_name_or_path: "Qwen/Qwen3-1.7B"
temperature: 0.1
remove_think_prefix: true
api_key: "sk-xxxxxx"
api_base: "https://api.openai.com/v1"
max_tokens: 4096
embedder:
backend: "ollama"
config: