Hi all
I’m building a custom AI agent on Vertex AI Agent Engine using LangGraph. The agent graph is fairly large (dozens of nodes across multiple subgraphs). I’ve implemented both synchronous and asynchronous endpoints (query / async_query), and deployment to Vertex AI Agent Engine works as expected.
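For context, my agent class is structured roughly like this (heavily simplified, LangGraph details elided; the `register_operations` shape is my understanding of the Agent Engine custom-agent convention from the docs):

```python
# Simplified sketch of my custom agent (not the full graph).
class BackgroundAgent:
    def set_up(self):
        # In the real agent, the LangGraph graph is compiled here (omitted).
        self.graph = None

    def query(self, *, input: str) -> dict:
        # Synchronous entry point: runs the graph to completion and
        # returns the final state. Placeholder logic for illustration.
        return {"output": f"processed: {input}"}

    async def async_query(self, *, input: str) -> dict:
        # Asynchronous entry point with the same semantics.
        return {"output": f"processed: {input}"}

    def register_operations(self) -> dict:
        # Agent Engine convention (as I read the docs): "" maps to sync
        # operations, "async" to async operations.
        return {"": ["query"], "async": ["async_query"]}
```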
Use case: the agent is not interactive; it’s a long-running background job. I’d like to trigger a run from an external event, receive a job ID immediately, and have the run execute in the background. The job ID would then be used to check status and retrieve results later.
For reference:
- Vertex AI custom agent docs: Develop a custom agent | Generative AI on Vertex AI | Google Cloud
- LangGraph Platform background runs: https://docs.langchain.com/langgraph-platform/background-run
Question:
Does Vertex AI Agent Engine provide a built-in mechanism for background/non-interactive runs that returns a job ID or thread information (similar to LangGraph Platform’s “background run” feature)? If so, which API/resource should I use, and how is status/result retrieval typically handled?
Thanks in advance.