Long-term memory for agents?

I am trying to understand how Vertex AI handles long-term memory for agents.

Are there any best practices for managing and persisting conversational context across multiple sessions? The goal is to give the user a personalized and coherent experience.


You can store those conversations in some sort of database and retrieve them with RAG or plain queries; which approach fits depends on your application's needs and how much you want to build yourself.

Ideally, store each conversation together with the context you will need later, then retrieve the relevant past conversations and pass them to the model you are using.
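To make the store-and-retrieve idea concrete, here is a minimal sketch using SQLite and plain keyword matching. All table and function names are made up for illustration; a real RAG setup would use embeddings and a vector store instead of `LIKE` queries.

```python
import sqlite3

# Toy conversation store backed by SQLite (illustrative only,
# not part of any Vertex AI API).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")

def save_message(session_id, role, content):
    conn.execute(
        "INSERT INTO messages VALUES (?, ?, ?)", (session_id, role, content)
    )

def recall(session_id, keyword):
    # Plain keyword retrieval; swap in embeddings + vector search for RAG.
    return conn.execute(
        "SELECT role, content FROM messages "
        "WHERE session_id = ? AND content LIKE ?",
        (session_id, f"%{keyword}%"),
    ).fetchall()

save_message("s1", "user", "I prefer vegetarian recipes")
save_message("s1", "assistant", "Noted, vegetarian it is.")
past = recall("s1", "vegetarian")
```

The rows returned by `recall` would then be prepended to the model prompt as context for the next turn.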


Hi @Benjen_Bolton, great question! While the RAG approach is still viable, we recently released a managed memory service, Vertex AI Memory Bank. Here you can find a tutorial on how to get started.

Happy to answer any additional questions you might have.

Best


Hi @Benjen_Bolton,

Vertex AI enables long-term memory for agents through its Memory Bank, which stores user-specific information across sessions to keep interactions personalized and coherent. It works alongside session and state management: sessions track events, state holds temporary data, and memory retains persistent facts such as preferences or goals.

To manage context effectively, start each conversation with a unique session ID and use session state for short-term data. Extract key memories automatically with GenerateMemories or manually with CreateMemory, then retrieve relevant ones using semantic search to enrich responses. This avoids prompt stuffing and keeps interactions focused.
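The session/state/memory split described above can be sketched with a toy in-memory store. This is illustrative Python only, not the Vertex AI SDK; `extract_memories` and `retrieve_memories` are hypothetical stand-ins for what GenerateMemories and semantic search do in the managed service.

```python
import uuid

# Toy stand-ins: short-term session events vs. long-term memories.
sessions = {}   # session_id -> list of events (lives for one conversation)
memories = []   # persistent facts that survive across sessions

def start_session():
    session_id = str(uuid.uuid4())  # unique ID per conversation
    sessions[session_id] = []
    return session_id

def add_event(session_id, text):
    sessions[session_id].append(text)

def extract_memories(session_id):
    # Stand-in for GenerateMemories: pull durable facts from the transcript.
    # Here we naively keep lines that declare a preference.
    for event in sessions[session_id]:
        if "prefer" in event:
            memories.append(event)

def retrieve_memories(query):
    # Stand-in for semantic search: naive word overlap instead of embeddings.
    q = set(query.lower().split())
    return [m for m in memories if q & set(m.lower().split())]

sid = start_session()
add_event(sid, "I prefer window seats on flights")
add_event(sid, "Book me something for Tuesday")
extract_memories(sid)
relevant = retrieve_memories("window seats on flights")
```

Only the retrieved memories (not the whole transcript) get injected into the prompt, which is what keeps interactions focused and avoids prompt stuffing.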

Best practices include summarizing and prioritizing useful memories, securing data against prompt injection, and complying with privacy standards. For implementation, Google’s Agent Development Kit (ADK) offers tools like load_memory and supports flexible deployment via Docker or REST APIs.
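As a small illustration of "summarizing and prioritizing useful memories", here is a hypothetical helper that dedupes a memory log and keeps only the newest few entries, so the context passed to the model stays compact:

```python
# Illustrative sketch only; function and variable names are made up.
def prioritize(memory_log, keep=3):
    """Drop duplicate memories and keep the most recent `keep` entries."""
    seen = set()
    deduped = []
    for m in reversed(memory_log):   # newest first
        if m not in seen:
            seen.add(m)
            deduped.append(m)
    return deduped[:keep]

log = [
    "likes jazz",
    "prefers email over phone",
    "likes jazz",          # duplicate, dropped
    "goal: learn Spanish",
    "timezone: CET",
]
context = prioritize(log)
```

In a production setup you would likely let a model write the summaries, but the principle is the same: bound the amount of memory that reaches the prompt.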
