Building and Deploying AI Agents with LangChain on Vertex AI

@nadav_w , thanks for the question, this is a common one! In the end, Reasoning Engine, LangServe + Cloud Run, and the other deployment options are all just different ways of deploying and hosting your agent as Python code. Some developers prefer to work directly with Cloud Run, while others prefer a higher-level abstraction such as LangServe on Cloud Run or Reasoning Engine. If I'm starting with the agent and building the app around it, I like to start with Reasoning Engine or LangServe. If I'm starting with an app that does more than just interact with an agent, I'll typically start from Cloud Run. And as your agent and app grow in complexity, you can switch between approaches to keep the app and agent modular and maintainable!
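Whichever host you pick, the deployable unit is just Python. As an illustration, here is a minimal sketch of the kind of object Reasoning Engine can host: a plain class exposing `set_up()` and `query()` methods (those method names follow Reasoning Engine's custom-template convention; please check the current Vertex AI docs before relying on them). The `EchoAgent` class and its logic are purely illustrative stand-ins for a real LangChain agent, and the actual deploy call in the Vertex AI SDK is omitted here.

```python
class EchoAgent:
    """Toy stand-in for a LangChain agent, packaged for deployment."""

    def set_up(self) -> None:
        # A real agent would build its chains, clients, and other heavyweight
        # resources here, so they are created inside the serving container
        # rather than serialized at deploy time.
        self.prefix = "agent:"

    def query(self, *, user_input: str) -> str:
        # A real agent would invoke an LLM or a LangChain runnable here.
        return f"{self.prefix} {user_input}"


# Local smoke test before deploying the same object to a managed runtime.
agent = EchoAgent()
agent.set_up()
print(agent.query(user_input="hello"))  # prints "agent: hello"
```

The useful property is that this same object can be exercised locally, wrapped in a LangServe route, or handed to a managed runtime, which is what makes switching between the deployment options above relatively painless.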

@DanieleV , good question! The answer here is similar to the previous one on deployment, but this time the focus is the developer's experience building the agent rather than deploying it. Reasoning Engine, LangChain, and Agent Console are just different ways of constructing agents at different abstraction levels. If you spend most of your day at the LangChain layer, it might make sense to use LangChain or LangGraph directly in your code. If you spend most of your day working with Google Cloud SDKs and APIs such as Vertex AI, you might find Reasoning Engine the easiest to work with. Or if you want to quickly prototype an agent that matches the chatbot + RAG pattern, Agent Console is a good starting point. I often prototype simple versions of an agent in two or more of these tools to get a feel for which approach will work best for a given use case.
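To make the abstraction levels concrete, here is a hand-rolled sketch of the tool-calling loop that frameworks like LangChain and LangGraph abstract away. Everything here is illustrative: `fake_model` is a stub standing in for an LLM call, and `get_time` is a made-up tool.

```python
from typing import Callable


def get_time(_: str) -> str:
    # Illustrative tool; a real one might call an API or database.
    return "12:00"


TOOLS: dict[str, Callable[[str], str]] = {"get_time": get_time}


def fake_model(prompt: str) -> str:
    # Stand-in for an LLM call: emits either a tool call or a final answer.
    if "12:00" in prompt:
        return "FINAL: it is 12:00"
    return "CALL: get_time"


def run_agent(question: str, max_steps: int = 5) -> str:
    # The classic agent loop: ask the model, run the requested tool,
    # append the observation, and repeat until a final answer appears.
    prompt = question
    for _ in range(max_steps):
        out = fake_model(prompt)
        if out.startswith("FINAL:"):
            return out.removeprefix("FINAL:").strip()
        tool_name = out.removeprefix("CALL:").strip()
        prompt += "\nObservation: " + TOOLS[tool_name](question)
    return "gave up"


print(run_agent("what time is it?"))  # prints "it is 12:00"
```

Working at the LangChain/LangGraph layer means writing this loop (or its graph-shaped equivalent) yourself with real models and tools; Reasoning Engine and Agent Console give you progressively more of it pre-built.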

@emerworth , yes! This has been a common feature request for Reasoning Engine and is being worked on. Feel free to open a new feature request on the public issue tracker and point me to it. That way we can learn more about your use case and let you know when the feature is ready to test!