Multimodal Support
Integrate images into your interactions with Zentry
Zentry extends its capabilities beyond text by supporting multimodal data. With this feature, users can seamlessly integrate images into their interactions, allowing Zentry to extract relevant information from them.
How It Works
When a user submits an image, Zentry processes it to extract textual information and other pertinent details. These details are then added to the user’s memory, enhancing the system’s ability to understand and recall multimodal inputs.
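The flow described above can be sketched in a few lines of Python. Note that `MemoryStore` and `extract_image_text` are hypothetical stand-ins for illustration only, not Zentry's actual API; the point is the pipeline: detect image content, convert it to text, then store the result alongside plain-text memories.

```python
from dataclasses import dataclass, field

def extract_image_text(image_url: str) -> str:
    # Hypothetical stand-in for the image-understanding step:
    # in practice a vision model would extract text and other
    # pertinent details from the image.
    return f"description of image at {image_url}"

@dataclass
class MemoryStore:
    # Minimal in-memory sketch of a per-user memory store.
    memories: list[str] = field(default_factory=list)

    def add(self, messages: list[dict], user_id: str) -> None:
        # Text content is stored directly; image content is first
        # converted to a textual description, then stored.
        for msg in messages:
            content = msg["content"]
            if isinstance(content, dict) and content.get("type") == "image_url":
                self.memories.append(extract_image_text(content["image_url"]))
            else:
                self.memories.append(content)

store = MemoryStore()
store.add(
    [
        {"role": "user", "content": "I went hiking yesterday."},
        {
            "role": "user",
            "content": {
                "type": "image_url",
                "image_url": "https://example.com/trail.jpg",
            },
        },
    ],
    user_id="alice",
)
print(store.memories)
```

Because the image is reduced to text at ingestion time, later recall works over a single uniform representation, which is what lets the system "remember" multimodal inputs with an ordinary text memory store.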
In this way, you can seamlessly incorporate images into your interactions, further enhancing Zentry's multimodal capabilities.
If you have any questions, please feel free to reach out to us.