📢 Announcing our research paper: Zentry achieves 26% higher accuracy than OpenAI Memory, 91% lower latency, and 90% token savings! Read the paper to learn how we're revolutionizing AI agent memory.
Core Operations
Zentry exposes two main endpoints for interacting with memories:

- The `add` endpoint for ingesting conversations and storing them as memories
- The `search` endpoint for retrieving relevant memories based on queries
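The two-endpoint contract can be sketched with a minimal in-memory stand-in. The class and method names below are illustrative assumptions, not the actual Zentry SDK; a real deployment would call the hosted `add` and `search` endpoints instead of a local object.

```python
# Minimal in-memory sketch of the add/search contract.
# MemoryStore and its signatures are hypothetical, not the Zentry client API.
class MemoryStore:
    def __init__(self):
        self._memories = []

    def add(self, messages, user_id):
        """Ingest a conversation and store its messages as memories."""
        for msg in messages:
            self._memories.append({"user_id": user_id, "text": msg["content"]})
        return {"stored": len(messages)}

    def search(self, query, user_id):
        """Return this user's memories that share terms with the query."""
        terms = set(query.lower().split())
        return [
            m for m in self._memories
            if m["user_id"] == user_id
            and terms & set(m["text"].lower().split())
        ]


store = MemoryStore()
store.add([{"role": "user", "content": "I love hiking in the Alps"}],
          user_id="u1")
results = store.search("hiking", user_id="u1")
```

In the real system the `search` step is semantic rather than keyword overlap, but the shape of the two calls is the same: write conversations in, read scored memories out.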
Adding Memories

Architecture diagram illustrating the process of adding memories.
1. **Information Extraction**
   - An LLM extracts relevant memories from the conversation
   - It identifies important entities and their relationships

2. **Conflict Resolution**
   - The system compares new information with existing data
   - It identifies and resolves any contradictions

3. **Memory Storage**
   - A vector database stores the actual memories
   - A graph database maintains relationship information
   - Information is continuously updated with each interaction
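The three stages above can be sketched as a small pipeline. The extraction and conflict rules here are toy stand-ins (the real system uses an LLM and semantic comparison), and the two lists stand in for the vector and graph stores:

```python
# Sketch of the add pipeline: extract -> resolve conflicts -> store.
# All logic below is illustrative; the real extraction step is an LLM call.
def extract_facts(conversation):
    # Stand-in for the LLM: treat each user message as a candidate fact.
    return [m["content"] for m in conversation if m["role"] == "user"]

def resolve_conflicts(new_facts, existing):
    # Toy rule: keep a new fact unless an identical one is already stored.
    return [f for f in new_facts if f not in existing]

def add_memories(conversation, vector_db, graph_db):
    facts = extract_facts(conversation)
    fresh = resolve_conflicts(facts, vector_db)
    vector_db.extend(fresh)                      # vector store: the facts
    for f in fresh:
        graph_db.append(("user", "stated", f))   # graph store: relationships
    return fresh
```

Running the same conversation through twice stores nothing the second time, which is the point of the conflict-resolution stage: the stores converge instead of accumulating duplicates.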
Searching Memories

Architecture diagram illustrating the memory search process.
1. **Query Processing**
   - An LLM processes and optimizes the search query
   - The system prepares filters for targeted search

2. **Vector Search**
   - Performs semantic search using the optimized query
   - Ranks results by relevance to the query
   - Applies specified filters (user, agent, metadata, etc.)

3. **Result Processing**
   - Combines and ranks the search results
   - Returns memories with relevance scores
   - Includes associated metadata and timestamps
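The search steps above can be sketched with a toy vector search: a bag-of-words counter stands in for the real embedding model, and the filter and scoring logic is illustrative rather than Zentry's actual implementation.

```python
import math
from collections import Counter

# Toy semantic search: embed -> filter -> score -> rank.
# embed() is a bag-of-words stand-in for a real embedding model.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, memories, user_id=None):
    q = embed(query)
    hits = []
    for m in memories:
        if user_id and m["user_id"] != user_id:   # apply the user filter
            continue
        score = cosine(q, embed(m["text"]))
        if score > 0:
            hits.append({**m, "score": round(score, 3)})
    # Rank results by relevance, highest score first.
    return sorted(hits, key=lambda h: h["score"], reverse=True)
```

Each hit carries its source record plus a relevance score, mirroring the result-processing stage: filtered, scored, ranked, and returned with metadata attached.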