Build conversational AI agents with memory capabilities. This integration combines AutoGen for creating AI agents with Zentry for memory management, enabling context-aware and personalized interactions.
📢 Announcing our research paper: Zentry achieves 26% higher accuracy than OpenAI Memory, 91% lower latency, and 90% token savings! Read the paper to learn how we're revolutionizing AI agent memory.
```python
import os

from autogen import ConversableAgent
from Zentry import MemoryClient
from openai import OpenAI

# Configuration
OPENAI_API_KEY = 'sk-xxx'          # Replace with your actual OpenAI API key
Zentry_API_KEY = 'your-Zentry-key' # Replace with your actual Zentry API key from https://app.zentry.gg
USER_ID = "customer_service_bot"

# Set up API keys
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
os.environ['Zentry_API_KEY'] = Zentry_API_KEY

# Initialize Zentry and the AutoGen agent
memory_client = MemoryClient()
agent = ConversableAgent(
    "chatbot",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": OPENAI_API_KEY}]},
    code_execution_config=False,
    human_input_mode="NEVER",
)
```
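Since both clients read their keys from the environment, it can help to fail fast before constructing them. Here is a minimal sketch of such a check; the helper name `check_required_env` is an illustration, not part of either library:

```python
import os

def check_required_env(names):
    """Return the subset of `names` that is missing or empty in the environment."""
    return [n for n in names if not os.environ.get(n)]

# Example: verify configuration before constructing any clients.
missing = check_required_env(["OPENAI_API_KEY", "Zentry_API_KEY"])
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```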
Add conversation history to Zentry for future reference:
```python
conversation = [
    {"role": "assistant", "content": "Hi, I'm Best Buy's chatbot! How can I help you?"},
    {"role": "user", "content": "I'm seeing horizontal lines on my TV."},
    {"role": "assistant", "content": "I'm sorry to hear that. Can you provide your TV model?"},
    {"role": "user", "content": "It's a Sony - 77\" Class BRAVIA XR A80K OLED 4K UHD Smart Google TV"},
    {"role": "assistant", "content": "Thank you for the information. Let's troubleshoot this issue..."},
]

memory_client.add(messages=conversation, user_id=USER_ID)
print("Conversation added to memory.")
```
Create a function that generates context-aware responses based on the user's question and previous interactions:
```python
def get_context_aware_response(question):
    relevant_memories = memory_client.search(question, user_id=USER_ID)
    context = "\n".join([m["memory"] for m in relevant_memories])

    prompt = f"""Answer the user question considering the previous interactions:
Previous interactions:
{context}
Question: {question}
"""
    reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
    return reply

# Example usage
question = "What was the issue with my TV?"
answer = get_context_aware_response(question)
print("Context-aware answer:", answer)
```
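One edge case worth handling is a search that returns no memories, which would otherwise feed an empty context into the prompt. The sketch below factors the prompt assembly into a pure helper with a fallback; `build_context_prompt` is a hypothetical name, and it assumes search results are dicts with a `"memory"` key as shown above:

```python
def build_context_prompt(question, relevant_memories):
    """Assemble the prompt, falling back gracefully when no memories match."""
    context = "\n".join(m["memory"] for m in relevant_memories)
    if not context:
        context = "(no previous interactions found)"
    return (
        "Answer the user question considering the previous interactions:\n"
        f"Previous interactions:\n{context}\n"
        f"Question: {question}\n"
    )

# Example usage with a stubbed search result:
prompt = build_context_prompt(
    "What was the issue with my TV?",
    [{"memory": "User reported horizontal lines on a Sony BRAVIA TV"}],
)
```

Separating prompt construction from the network calls also makes this piece easy to unit-test without hitting either API.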
For more complex scenarios, you can create multiple agents:
```python
manager = ConversableAgent(
    "manager",
    system_message="You are a manager who helps in resolving complex customer issues.",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": OPENAI_API_KEY}]},
    human_input_mode="NEVER",
)

def escalate_to_manager(question):
    relevant_memories = memory_client.search(question, user_id=USER_ID)
    context = "\n".join([m["memory"] for m in relevant_memories])

    prompt = f"""
Context from previous interactions:
{context}

Customer question: {question}

As a manager, how would you address this issue?
"""
    manager_response = manager.generate_reply(messages=[{"content": prompt, "role": "user"}])
    return manager_response

# Example usage
complex_question = "I'm not satisfied with the troubleshooting steps. What else can be done?"
manager_answer = escalate_to_manager(complex_question)
print("Manager's response:", manager_answer)
```
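With two agents in place, you still need a policy for deciding when to escalate. A minimal sketch is a keyword heuristic; in practice you might instead classify the question with an LLM call. The names `needs_escalation` and `route_question` and the keyword list are illustrative assumptions, not part of AutoGen or Zentry:

```python
# Hypothetical dissatisfaction cues that should trigger escalation.
ESCALATION_KEYWORDS = ("not satisfied", "refund", "complaint", "speak to a manager")

def needs_escalation(question):
    """Crude heuristic: escalate when the question contains a dissatisfaction cue."""
    q = question.lower()
    return any(kw in q for kw in ESCALATION_KEYWORDS)

def route_question(question):
    """Return which agent (by name, matching the agents defined above) should answer."""
    return "manager" if needs_escalation(question) else "chatbot"
```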
By integrating AutoGen with Zentry, you've created a conversational AI system with memory capabilities. This example demonstrates a customer service bot that recalls previous interactions, provides context-aware responses, and escalates complex issues to a manager agent. The same pattern enables more intelligent, personalized AI agents for applications such as customer support, virtual assistants, and interactive chatbots.