To use DeepSeek LLM models, set the DEEPSEEK_API_KEY environment variable. You can also optionally set DEEPSEEK_API_BASE if you need to use a different API endpoint (defaults to "https://api.deepseek.com").
import os

from Zentry import Memory

os.environ["DEEPSEEK_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_KEY"] = "your-api-key"  # for the embedder model

config = {
    "llm": {
        "provider": "deepseek",
        "config": {
            "model": "deepseek-chat",  # default model
            "temperature": 0.2,
            "max_tokens": 2000,
            "top_p": 1.0
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies, but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
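Once memories have been added, you can read them back for the same user. The following is a minimal sketch, assuming Zentry's Memory exposes search() and get_all() methods that accept a user_id; check the Memory API reference for the exact signatures in your version.

# Retrieve memories relevant to a query (assumed API; verify against
# the Zentry Memory reference).
results = m.search("What kind of movies does the user like?", user_id="alice")
print(results)

# Or list everything stored for this user.
all_memories = m.get_all(user_id="alice")
print(all_memories)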
You can also configure the API base URL in the config:
config = {
    "llm": {
        "provider": "deepseek",
        "config": {
            "model": "deepseek-chat",
            "deepseek_base_url": "https://your-custom-endpoint.com",
            "api_key": "your-api-key"  # alternative to setting the environment variable
        }
    }
}
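Alternatively, the custom endpoint can come from the DEEPSEEK_API_BASE environment variable mentioned earlier, which keeps the URL out of your code. A minimal sketch, assuming the remaining llm config values fall back to their defaults when omitted:

import os

from Zentry import Memory

# Point the DeepSeek provider at a custom endpoint via the environment
# instead of deepseek_base_url in the config.
os.environ["DEEPSEEK_API_KEY"] = "your-api-key"
os.environ["DEEPSEEK_API_BASE"] = "https://your-custom-endpoint.com"

m = Memory.from_config({"llm": {"provider": "deepseek"}})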
All available parameters for the deepseek config are listed in the Master List of All Params in Config.