Deploying ADK agents with MCP on Vertex AI Agent Engine using custom installation scripts
This blog was co-authored with Shawn Yang, Software Engineer, Vertex AI Agent Engine at Google Cloud.
TL;DR:
Vertex AI Agent Engine has a new feature for custom installation scripts, which lets you run shell scripts during your agent’s build process. This enables new patterns, including deploying agents that use external tools via the Model Context Protocol (MCP). This guide provides a walkthrough of the entire workflow, from local development to cloud deployment, by building a Reddit agent that uses this new capability.
Please note: This guide is current as of August 2025. The feature requires google-cloud-aiplatform version 1.101.0 or newer.
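To follow along, upgrade the SDK first. A minimal install command (the agent_engines and adk extras are what this walkthrough assumes you need):
pip install --upgrade "google-cloud-aiplatform[agent_engines,adk]>=1.101.0"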
The Challenge: Non-Python Dependencies
Building and deploying agents that use tools on Vertex AI is a straightforward process. However, you might face some challenges when a tool isn’t a simple Python function. Integrating compiled binaries, Node.js services, or other non-Python dependencies has been difficult. A common workaround was to use Python’s subprocess module to run apt-get install or curl commands at runtime. This approach is inefficient, adds significant startup latency, and makes dependency management cumbersome.
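To make the pain concrete, here is a sketch of that runtime workaround (the nodejs dependency is a hypothetical example); every new instance pays this cost at startup:
# Anti-pattern: installing a system dependency at runtime, on every startup.
import subprocess

subprocess.run(["apt-get", "update"], check=True)
subprocess.run(["apt-get", "install", "-y", "nodejs"], check=True)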
The introduction of custom installation scripts directly addresses this problem.
Understanding build-time installation
The new approach is built around the concept of build-time installation. Instead of setting up dependencies every time an agent instance starts (runtime), you define the installation steps in a shell script. Agent Engine executes this script once during the initial build, baking the dependencies directly into the agent’s container image. This moves setup from a slow, repetitive runtime task to a one-time, optimized build step, making startup faster and more reliable.
Most importantly, build-time installation makes it practical to deploy agents with self-contained tool servers, especially those that use the Model Context Protocol (MCP). MCP provides a standard interface for an LLM to communicate with external tools, which often run as separate server processes. With custom installation scripts, these servers can be installed and run as part of the agent’s core environment.
Note: This approach has pros and cons, but it is a first step toward supporting MCP on Agent Engine, and we are actively seeking feedback. Let us know if you think having MCP support is beneficial.
Let’s see how to use custom installation scripts with an agent on Vertex AI Agent Engine.
Example Agent: A Reddit Assistant
To demonstrate this, we will build, test, and deploy a Reddit-querying agent. This agent uses the mcp-reddit tool, an MCP server for the Reddit API, making it a great example for this deployment pattern. You can find the notebook here.
Step 1: The Custom Installation Script
The core of the workflow is the installation script. This file instructs Vertex AI Agent Engine on how to prepare the environment. For our example, the script will:
- Install uv, a fast Python package manager, to accelerate the package installation step.
- Use uv to install the mcp-reddit tool from its GitHub repository.
We will save this script as installation_scripts/install_local_mcp.sh:
#!/bin/bash
# Exit immediately if a command exits with a non-zero status.
set -e
echo "Installing MCP Reddit Server"
# Install uv (a fast Python package manager)
apt-get update
apt-get install -y curl
echo "Installing uv..."
curl -LsSf https://astral.sh/uv/install.sh | sh
# Add uv to PATH for current session
export PATH="$HOME/.local/bin:$PATH"
# Install the mcp-reddit tool using the command from its documentation
echo "Installing mcp-reddit using uv..."
uv pip install "git+https://github.com/adhikasp/mcp-reddit.git" --system
echo "MCP Reddit Server installation complete."
Step 2: Building and Testing the Agent Locally
Before deploying, we validate the agent locally for faster debugging. We define an LlmAgent using the Google Agent Development Kit (ADK) and, in its tools parameter, we instantiate MCPToolset. We point it to the mcp-reddit command that our installation script makes available.
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters
from google.adk.tools.mcp_tool import StdioConnectionParams

def create_agent(errlog):
    root_agent = LlmAgent(
        model='gemini-2.5-flash',
        name='reddit_assistant_agent',
        instruction='Help the user fetch reddit info.',
        tools=[
            MCPToolset(
                connection_params=StdioConnectionParams(
                    server_params=StdioServerParameters(
                        command="mcp-reddit",
                    ),
                ),
                errlog=errlog,  # Required for async logging in Colab
            )
        ],
    )
    return root_agent
Using the ADK Runner with in-memory services allows us to test the agent’s logic and check that it can communicate with the mcp-reddit server.
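Here is a minimal local test sketch along those lines. It assumes mcp-reddit is installed locally and Reddit API credentials are set as environment variables; the prompt and IDs are illustrative:
import asyncio
import sys

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

async def main():
    session_service = InMemorySessionService()
    runner = Runner(
        agent=create_agent(errlog=sys.stderr),
        app_name="reddit_app",
        session_service=session_service,
    )
    session = await session_service.create_session(app_name="reddit_app", user_id="local_user")
    message = types.Content(role="user", parts=[types.Part(text="What's trending on r/MachineLearning?")])
    # Stream events from the agent and print the final response.
    async for event in runner.run_async(user_id="local_user", session_id=session.id, new_message=message):
        if event.is_final_response() and event.content:
            print(event.content.parts[0].text)

asyncio.run(main())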
Step 3: Packaging the Agent with ModuleAgent
A key technical detail for deployment is that the MCPToolset contains non-serializable objects (like thread locks) and cannot be “pickled” for direct transfer to the Agent Engine.
The solution is to use the ModuleAgent class from the Vertex AI SDK. This involves placing the agent’s construction logic into a separate Python file (e.g., root_agent.py). The Agent Engine service imports this module on the server, building the agent within the target cloud environment.
# root_agent.py
import os
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters
from google.adk.tools.mcp_tool import StdioConnectionParams
from vertexai.preview.reasoning_engines import AdkApp

# ... service builder functions ...

agent_app = AdkApp(
    agent=LlmAgent(
        model='gemini-2.5-flash',
        # ... agent definition ...
        tools=[
            MCPToolset(
                connection_params=StdioConnectionParams(
                    server_params=StdioServerParameters(
                        command="mcp-reddit",
                    ),
                    timeout=60,
                ),
            )
        ],
    ),
    # ... service builder ...
)
Step 4: Deploying to Vertex AI Agent Engine
Finally, we deploy the agent using the create method. Note these two parameters:
- extra_packages: This list must include our module file (root_agent.py) and our installation script (install_local_mcp.sh). It bundles our logic and setup instructions into the deployment package.
- build_options: This parameter is the key to enabling the feature. Its installation list tells Agent Engine which scripts to execute during the build.
We use the ModuleAgent class and provide the module name, the agent variable name, and the registered operations: a dictionary that maps each API mode to a list of method names, which you can obtain by calling the agent’s register_operations() method.
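To see this dictionary for yourself before deploying, you can call it on the AdkApp defined in root_agent.py (assuming the file is importable from your working directory):
from root_agent import agent_app

# Maps each API mode ("", "async", "stream", "async_stream") to its method names.
print(agent_app.register_operations())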
from vertexai import agent_engines

remote_app = agent_engines.create(
    display_name="reddit_assistant_agent",
    description="A Reddit assistant agent with MCP",
    agent_engine=agent_engines.ModuleAgent(
        module_name="root_agent",
        agent_name="agent_app",
        register_operations={
            "": ["get_session", ...],
            "async": [
                "async_get_session",
                # ...
            ],
            "stream": ["stream_query", ...],
            "async_stream": ["async_stream_query"],
        },
    ),
    requirements=[
        # ...
    ],
    extra_packages=[
        "root_agent.py",
        "installation_scripts/install_local_mcp.sh",  # Bundle the script
    ],
    env_vars={
        # ... Reddit API keys ...
    },
    build_options={
        "installation": [
            "installation_scripts/install_local_mcp.sh",  # Execute the script
        ],
    },
)
Once the command completes, the agent is live on Vertex AI. When it receives a query, it launches the mcp-reddit server that was installed via the custom script and returns a response generated by the agent, as shown below.
# Interact with the deployed agent
chat_loop(remote_app)
# 🚀 Starting chat...
# 👤 User ID: uid_xxxx
# 📁 Session ID: abc_12345_xxxx
# Hello!
# Hi! How are you today? I am Reddit assistant...
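The chat_loop helper above comes from the accompanying notebook; you can also call the deployed agent directly through the operations registered earlier. A minimal sketch with illustrative IDs and prompt:
# Create a remote session, then stream a query through the deployed agent.
session = remote_app.create_session(user_id="uid_demo")
for event in remote_app.stream_query(
    user_id="uid_demo",
    session_id=session["id"],
    message="What's trending on r/programming?",
):
    print(event)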
Conclusion
Support for custom installation scripts is a significant new capability for deploying agents on Vertex AI Agent Engine. It provides a mechanism for packaging complex non-Python dependencies, bridging the gap between local agent development and scalable agent deployment. This feature improves the integration of diverse LLM tools, particularly those that use the Model Context Protocol (MCP). The pattern shown here can be adapted for other tools, allowing developers to incorporate a wider range of existing binaries and services into their agents.
What’s Next
To start using custom installation scripts with your agents on Vertex AI Agent Engine, check out the Vertex AI Agent Engine documentation and the sample notebook linked above.
Thanks for reading! We hope this technical walkthrough was valuable. If you have questions or feedback, connect with us on LinkedIn or X/Twitter. Share your experiences in the Agents community on Discuss.
Happy building!