Merged
2 changes: 1 addition & 1 deletion .gitignore
@@ -234,6 +234,6 @@ apps/openwork-memos-integration/apps/desktop/public/assets/usecases/
# Outputs and Evaluation Results
outputs

evaluation/data/temporal_locomo
evaluation/data/
test_add_pipeline.py
test_file_pipeline.py
2 changes: 1 addition & 1 deletion Makefile
@@ -36,7 +36,7 @@ pre_commit:
poetry run pre-commit run -a

serve:
poetry run uvicorn memos.api.start_api:app
poetry run uvicorn memos.api.server_api:app

openapi:
poetry run memos export_openapi --output docs/openapi.json
15 changes: 8 additions & 7 deletions README.md
@@ -75,7 +75,7 @@
- [**72% lower token usage**](https://x.com/MemOS_dev/status/2020854044583924111) — intelligent memory retrieval instead of loading full chat history
- [**Multi-agent memory sharing**](https://x.com/MemOS_dev/status/2020538135487062094) — multi-instance agents share memory via same user_id, automatic context handoff

Get your API key: [MemOS Dashboard](https://memos-dashboard.openmem.net/cn/login/)  
Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/MemOS-Cloud-OpenClaw-Plugin)

### 🧠 Local Plugin — 100% On-Device Memory
@@ -84,7 +84,7 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem
- **Hybrid search + task & skill evolution** — FTS5 + vector search, auto task summarization, reusable skills that self-upgrade
- **Multi-agent collaboration + Memory Viewer** — memory isolation, skill sharing, full web dashboard with 7 management pages

🌐 [Homepage](https://memos-claw.openmem.net) ·
📖 [Documentation](https://memos-claw.openmem.net/docs/index.html) · 📦 [NPM](https://www.npmjs.com/package/@memtensor/memos-local-openclaw-plugin)

## 📌 MemOS: Memory Operating System for AI Agents
@@ -104,10 +104,10 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem

### News

- **2026-03-08** · 🦞 **MemOS OpenClaw Plugin — Cloud & Local**  
Official OpenClaw memory plugins launched. **Cloud Plugin**: hosted memory service with 72% lower token usage and multi-agent memory sharing ([MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/MemOS-Cloud-OpenClaw-Plugin)). **Local Plugin** (`v1.0.0`): 100% on-device memory with persistent SQLite, hybrid search (FTS5 + vector), task summarization & skill evolution, multi-agent collaboration, and a full Memory Viewer dashboard.

- **2025-12-24** · 🎉 **MemOS v2.0: Stardust (星尘) Release**  
Comprehensive KB (doc/URL parsing + cross-project sharing), memory feedback & precise deletion, multi-modal memory (images/charts), tool memory for agent planning, Redis Streams scheduling + DB optimizations, streaming/non-streaming chat, MCP upgrade, and lightweight quick/full deployment.
<details>
<summary>✨ <b>New Features</b></summary>
@@ -155,7 +155,7 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem
</details>

- **2025-08-07** · 🎉 **MemOS v1.0.0 (MemCube) Release**
First MemCube release with a word-game demo, LongMemEval evaluation, BochaAISearchRetriever integration, NebulaGraph support, improved search capabilities, and the official Playground launch.
First MemCube release with a word-game demo, LongMemEval evaluation, BochaAISearchRetriever integration, improved search capabilities, and the official Playground launch.

<details>
<summary>✨ <b>New Features</b></summary>
@@ -176,7 +176,7 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem

**Plaintext Memory**
- Integrated internet search with Bocha.
- Added support for Nebula database.
- Expanded graph database support.
- Added contextual understanding for the tree-structured plaintext memory search interface.

</details>
@@ -188,7 +188,7 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem
- Fixed the concat_cache method.

**Plaintext Memory**
- Fixed Nebula search-related issues.
- Fixed graph search-related issues.

</details>

@@ -224,6 +224,7 @@ Full tutorial → [MemOS-Cloud-OpenClaw-Plugin](https://github.com/MemTensor/Mem
2. Configure `docker/.env.example` and copy to `MemOS/.env`
- The `OPENAI_API_KEY`, `MOS_EMBEDDER_API_KEY`, `MEMRADER_API_KEY`, and other keys can be obtained from [`BaiLian`](https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2f2165b08fRk4l&tab=api#/api).
- Fill in the corresponding configuration in the `MemOS/.env` file.
- Supported LLM providers: **OpenAI**, **Azure OpenAI**, **Qwen (DashScope)**, **DeepSeek**, **MiniMax**, **Ollama**, **HuggingFace**, **vLLM**. Set `MOS_CHAT_MODEL_PROVIDER` to select the backend (e.g., `openai`, `qwen`, `deepseek`, `minimax`).
3. Start the service.

- Launch via Docker
4 changes: 4 additions & 0 deletions apps/memos-local-openclaw/.env.example
@@ -18,6 +18,10 @@ SUMMARIZER_TEMPERATURE=0
# Port for the web-based Memory Viewer (default: 18799)
# VIEWER_PORT=18799

# ─── Tavily Search (optional) ───
# API key for Tavily web search (get from https://app.tavily.com)
# TAVILY_API_KEY=tvly-your-tavily-api-key

# ─── Telemetry (opt-out) ───
# Anonymous usage analytics to help improve the plugin.
# No memory content, queries, or personal data is ever sent — only tool names, latencies, and version info.
13 changes: 10 additions & 3 deletions docker/.env.example
@@ -25,9 +25,12 @@ MOS_MAX_TOKENS=2048
# Top-P for LLM in the Product API
MOS_TOP_P=0.9
# LLM for the Product API backend
MOS_CHAT_MODEL_PROVIDER=openai # openai | huggingface | vllm
MOS_CHAT_MODEL_PROVIDER=openai # openai | huggingface | vllm | minimax
OPENAI_API_KEY=sk-xxx # [required] when provider=openai
OPENAI_API_BASE=https://api.openai.com/v1 # [required] base for the key
# MiniMax LLM (when provider=minimax)
# MINIMAX_API_KEY=your-minimax-api-key # [required] when provider=minimax
# MINIMAX_API_BASE=https://api.minimax.io/v1 # base for MiniMax API

## MemReader / retrieval LLM
MEMRADER_MODEL=gpt-4o-mini
@@ -80,8 +83,12 @@ EMBEDDING_MODEL=nomic-embed-text:latest
## Internet search & preference memory
# Enable web search
ENABLE_INTERNET=false
# Internet search backend (bocha | tavily)
INTERNET_SEARCH_BACKEND=bocha
# API key for BOCHA Search
BOCHA_API_KEY= # required if ENABLE_INTERNET=true
BOCHA_API_KEY= # required if ENABLE_INTERNET=true and backend=bocha
# API key for Tavily Search
TAVILY_API_KEY= # required if ENABLE_INTERNET=true and backend=tavily
# default search mode
SEARCH_MODE=fast # fast | fine | mixture
# Slow retrieval strategy configuration, rewrite is the rewrite strategy
@@ -127,7 +134,7 @@ MEMSCHEDULER_USE_REDIS_QUEUE=false

## Graph / vector stores
# Neo4j database selection mode
NEO4J_BACKEND=neo4j-community # neo4j-community | neo4j | nebular | polardb
NEO4J_BACKEND=neo4j-community # neo4j-community | neo4j | polardb | postgres
# Neo4j database url
NEO4J_URI=bolt://localhost:7687 # required when backend=neo4j*
# Neo4j database user
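The new `INTERNET_SEARCH_BACKEND=bocha|tavily` switch above implies a small dispatch layer between the config and the concrete search clients. A minimal sketch of that selection pattern, using placeholder search functions (the real Bocha/Tavily client calls are not shown here and the function names are hypothetical):

```python
import os


def bocha_search(query: str, api_key: str) -> list[str]:
    # Placeholder for a real Bocha API call (hypothetical stub).
    return [f"bocha result for {query!r}"]


def tavily_search(query: str, api_key: str) -> list[str]:
    # Placeholder for a real tavily-python call (hypothetical stub).
    return [f"tavily result for {query!r}"]


_BACKENDS = {"bocha": bocha_search, "tavily": tavily_search}


def internet_search(query: str) -> list[str]:
    """Dispatch to the backend named by INTERNET_SEARCH_BACKEND (default: bocha)."""
    backend = os.getenv("INTERNET_SEARCH_BACKEND", "bocha")
    if backend not in _BACKENDS:
        raise ValueError(f"unknown search backend: {backend}")
    # Each backend reads its own key, matching the two env vars in the diff.
    key_var = "BOCHA_API_KEY" if backend == "bocha" else "TAVILY_API_KEY"
    api_key = os.environ.get(key_var, "")
    return _BACKENDS[backend](query, api_key)


if __name__ == "__main__":
    os.environ["INTERNET_SEARCH_BACKEND"] = "tavily"
    print(internet_search("MemOS")[0])  # → tavily result for 'MemOS'
```

The point of the table-based dispatch is that adding a third backend later only requires one new entry in `_BACKENDS` plus its key variable, not changes to every call site.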
2 changes: 1 addition & 1 deletion docker/docker-compose.yml
@@ -26,7 +26,7 @@ services:
- memos_network

neo4j:
image: neo4j:5.26.4
image: neo4j:5.26.6
container_name: neo4j-docker
ports:
- "7474:7474" # HTTP
1 change: 1 addition & 0 deletions docker/requirements-full.txt
@@ -185,3 +185,4 @@ py-key-value-shared==0.2.8
PyJWT==2.10.1
pytest==9.0.2
alibabacloud-oss-v2==1.2.2
tavily-python==0.5.0
1 change: 1 addition & 0 deletions docker/requirements.txt
@@ -124,3 +124,4 @@ uvloop==0.22.1; sys_platform != 'win32'
watchfiles==1.1.1
websockets==15.0.1
alibabacloud-oss-v2==1.2.2
tavily-python==0.5.0
34 changes: 32 additions & 2 deletions examples/basic_modules/llm.py
@@ -164,7 +164,37 @@
print("Scenario 6:", resp)


# Scenario 7: Using LLMFactory with Deepseek-chat + reasoning + CoT + streaming
# Scenario 7: Using LLMFactory with MiniMax (OpenAI-compatible API)
# Prerequisites:
# 1. Get your API key from the MiniMax platform.
# 2. Available models: MiniMax-M2.7 (flagship), MiniMax-M2.7-highspeed (low-latency),
# MiniMax-M2.5, MiniMax-M2.5-highspeed.

cfg_mm = LLMConfigFactory.model_validate(
{
"backend": "minimax",
"config": {
"model_name_or_path": "MiniMax-M2.7",
"api_key": "your-minimax-api-key",
"api_base": "https://api.minimax.io/v1",
"temperature": 0.7,
"max_tokens": 1024,
},
}
)
llm = LLMFactory.from_config(cfg_mm)
messages = [{"role": "user", "content": "Hello, who are you"}]
resp = llm.generate(messages)
print("Scenario 7:", resp)
print("==" * 20)

print("Scenario 7 (streaming):\n")
for chunk in llm.generate_stream(messages):
print(chunk, end="")
print("\n" + "==" * 20)


# Scenario 8: Using LLMFactory with DeepSeek-chat + reasoning + CoT + streaming

cfg2 = LLMConfigFactory.model_validate(
{
@@ -186,7 +216,7 @@
"content": "Explain how to solve this problem step-by-step. Be explicit in your thinking process. Question: If a train travels from city A to city B at 60 mph and returns at 40 mph, what is its average speed for the entire trip? Let's think step by step.",
},
]
print("Scenario 7:\n")
print("Scenario 8:\n")
for chunk in llm.generate_stream(messages):
print(chunk, end="")
print("==" * 20)
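Scenario 7 works because MiniMax exposes an OpenAI-compatible API: switching providers is mostly a matter of swapping `api_base`, `api_key`, and the model name. A stdlib-only sketch of the request such a backend would build (no network call is made; the endpoint path follows the usual OpenAI-compatible convention, which is an assumption about MiniMax's API):

```python
import json


def build_chat_request(
    api_base: str, api_key: str, model: str, messages: list[dict]
) -> tuple[str, dict, bytes]:
    """Build an OpenAI-compatible /chat/completions request (sketch only)."""
    url = api_base.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body


# The same builder serves OpenAI, MiniMax, or DeepSeek endpoints:
url, headers, body = build_chat_request(
    "https://api.minimax.io/v1",
    "your-minimax-api-key",
    "MiniMax-M2.7",
    [{"role": "user", "content": "Hello, who are you"}],
)
print(url)  # → https://api.minimax.io/v1/chat/completions
```

This is why `LLMConfigFactory` can treat these providers uniformly: only the three config values differ, not the request shape.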
83 changes: 48 additions & 35 deletions examples/basic_modules/neo4j_example.py
@@ -2,21 +2,36 @@

from datetime import datetime

from dotenv import load_dotenv

from memos.configs.embedder import EmbedderConfigFactory
from memos.configs.graph_db import GraphDBConfigFactory
from memos.embedders.factory import EmbedderFactory
from memos.graph_dbs.factory import GraphStoreFactory
from memos.memories.textual.item import TextualMemoryItem, TreeNodeTextualMemoryMetadata


load_dotenv()

NEO4J_URI = os.getenv("NEO4J_URI", "bolt://localhost:7687")
NEO4J_USER = os.getenv("NEO4J_USER", "neo4j")
NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD", "12345678")
NEO4J_DB_NAME = os.getenv("NEO4J_DB_NAME", "neo4j")
EMBEDDING_DIMENSION = int(os.getenv("EMBEDDING_DIMENSION", "3072"))

QDRANT_HOST = os.getenv("QDRANT_HOST", "localhost")
QDRANT_PORT = int(os.getenv("QDRANT_PORT", "6333"))

embedder_config = EmbedderConfigFactory.model_validate(
{
"backend": "universal_api",
"backend": os.getenv("MOS_EMBEDDER_BACKEND", "universal_api"),
"config": {
"provider": "openai",
"api_key": os.getenv("OPENAI_API_KEY", "sk-xxxxx"),
"model_name_or_path": "text-embedding-3-large",
"base_url": os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
"provider": os.getenv("MOS_EMBEDDER_PROVIDER", "openai"),
"api_key": os.getenv("MOS_EMBEDDER_API_KEY", os.getenv("OPENAI_API_KEY", "")),
"model_name_or_path": os.getenv("MOS_EMBEDDER_MODEL", "text-embedding-3-large"),
"base_url": os.getenv(
"MOS_EMBEDDER_API_BASE", os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1")
),
},
}
)
@@ -31,12 +46,12 @@ def get_neo4j_graph(db_name: str = "paper"):
config = GraphDBConfigFactory(
backend="neo4j",
config={
"uri": "bolt://xxxx:7687",
"user": "neo4j",
"password": "xxxx",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"auto_create": True,
"embedding_dimension": 3072,
"embedding_dimension": EMBEDDING_DIMENSION,
"use_multi_db": True,
},
)
@@ -49,12 +64,12 @@ def example_multi_db(db_name: str = "paper"):
config = GraphDBConfigFactory(
backend="neo4j",
config={
"uri": "bolt://localhost:7687",
"user": "neo4j",
"password": "12345678",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"auto_create": True,
"embedding_dimension": 3072,
"embedding_dimension": EMBEDDING_DIMENSION,
"use_multi_db": True,
},
)
@@ -288,14 +303,14 @@ def example_shared_db(db_name: str = "shared-traval-group"):
config = GraphDBConfigFactory(
backend="neo4j",
config={
"uri": "bolt://localhost:7687",
"user": "neo4j",
"password": "12345678",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"user_name": user_name,
"use_multi_db": False,
"auto_create": True,
"embedding_dimension": 3072,
"embedding_dimension": EMBEDDING_DIMENSION,
},
)
# Step 2: Instantiate graph store
@@ -353,12 +368,12 @@ def example_shared_db(db_name: str = "shared-traval-group"):
config_alice = GraphDBConfigFactory(
backend="neo4j",
config={
"uri": "bolt://localhost:7687",
"user": "neo4j",
"password": "12345678",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"user_name": user_list[0],
"embedding_dimension": 3072,
"embedding_dimension": EMBEDDING_DIMENSION,
},
)
graph_alice = GraphStoreFactory.from_config(config_alice)
@@ -382,24 +397,22 @@ def run_user_session(
config = GraphDBConfigFactory(
backend="neo4j-community",
config={
"uri": "bolt://localhost:7687",
"user": "neo4j",
"password": "12345678",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"user_name": user_name,
"use_multi_db": False,
"auto_create": False, # Neo4j Community does not allow auto DB creation
"embedding_dimension": 3072,
"auto_create": False,
"embedding_dimension": EMBEDDING_DIMENSION,
"vec_config": {
# Pass nested config to initialize external vector DB
# If you use qdrant, please use Server instead of local mode.
"backend": "qdrant",
"config": {
"collection_name": "neo4j_vec_db",
"vector_dimension": 3072,
"vector_dimension": EMBEDDING_DIMENSION,
"distance_metric": "cosine",
"host": "localhost",
"port": 6333,
"host": QDRANT_HOST,
"port": QDRANT_PORT,
},
},
},
@@ -408,14 +421,14 @@
config = GraphDBConfigFactory(
backend="neo4j",
config={
"uri": "bolt://localhost:7687",
"user": "neo4j",
"password": "12345678",
"uri": NEO4J_URI,
"user": NEO4J_USER,
"password": NEO4J_PASSWORD,
"db_name": db_name,
"user_name": user_name,
"use_multi_db": False,
"auto_create": True,
"embedding_dimension": 3072,
"embedding_dimension": EMBEDDING_DIMENSION,
},
)
graph = GraphStoreFactory.from_config(config)
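The refactor above replaces hardcoded credentials with `load_dotenv()` plus `os.getenv(..., default)` lookups. For readers without `python-dotenv` installed, a minimal stdlib equivalent of that pattern (a deliberately simplified parser: it ignores quoting and multi-line values, and like `load_dotenv()` it does not override variables already set in the environment):

```python
import os


def load_env_file(path: str) -> None:
    """Parse simple KEY=VALUE lines; already-set environment variables win."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, and malformed lines
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env file is fine; getenv defaults below still apply


load_env_file(".env")
NEO4J_URI = os.getenv("NEO4J_URI", "bolt://localhost:7687")
```

Using `setdefault` rather than assignment is what makes real deployment settings (e.g. exported in Docker) take precedence over the local `.env` file.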