Introduction: Bridging agents and your data
Large language models are incredibly powerful, but their knowledge is often static and disconnected from your business’s real-time, proprietary data. To build truly useful AI agents, you need to bridge this gap, allowing them to securely interact with live information stored in your databases. This is where the combination of Vellum and Google’s MCP Toolbox for Databases creates a powerful and secure solution.
Vellum is a platform designed for building production-grade, scalable AI applications. Its visual workflow builder and robust tooling make it easy to design, test, and deploy complex AI systems. A core component of this is the Agent Node, which enables the creation of sophisticated agents that can reason, plan, and use external tools to accomplish tasks.
Google’s MCP Toolbox for Databases is an open-source MCP server that acts as a secure bridge between your AI agent and your database. As described on its official GitHub repository, it exposes database operations as tools using the Model-Context Protocol (MCP). Instead of letting an LLM generate and execute arbitrary SQL—a significant security risk—the toolbox allows you to define a set of safe, pre-approved actions that the agent can call upon.
In this guide, we’ll walk through how to use these two technologies together to create an AI agent that can query and modify a database in a structured and secure way.
Practical use-cases for database-connected agents
Before diving into the technical details, let’s consider what you can build with this integration. The possibilities are vast and can transform how users and internal teams interact with data.
- Intelligent Customer Support: Build a support bot that can look up a customer's order status, check product inventory levels, or retrieve account details directly from your production database, providing instant and accurate answers.
- Internal Data Tools: Empower non-technical teams to query business data using natural language. A sales team could ask, "Show me all leads in the software industry in California," and the agent would translate this into a safe database query.
- Booking and Reservation Systems: Create an assistant that can search for available hotel rooms, flights, or appointments. As we'll see in the demo below, a user can find a hotel and then say, "Book Holiday Inn Hotel," triggering a database update to complete the reservation.
- Data Analysis Assistants: Develop agents that can perform simple data lookups and aggregations to answer business intelligence questions on the fly, without needing to write complex SQL queries or use a dedicated BI tool.
Why run MCP servers on Google Cloud?
When you go from testing to real production agents, how you host your MCP servers really matters. Running them on a laptop or VM is fine for demos, but not for serious workloads. Google Cloud gives you built-in security, and paired with Vellum’s compliance features, you get a safe setup for building enterprise-ready agents.
Stronger Authentication & Identity: With Google Cloud IAM, you can control exactly who or what can call your MCP server. Instead of juggling API keys or environment tokens, you can bind access to service accounts, enforce least-privilege roles, and rotate credentials automatically. On top of that, Vellum ensures that every workflow execution is fully auditable, with permissions and access checks aligned to compliance requirements.
Network Isolation & Compliance: On GCP, your MCP server can live inside a private VPC, invisible to the public internet. You can expose it only through secure HTTPS gateways and take advantage of built-in DDoS protection. Vellum's SDK can also run locally, keeping your entire setup within GCP and your own environment.
Observability & Safety Nets: Every tool call routed through Google Cloud can be logged, monitored, and audited. Developers get Cloud Logging, metrics, and alerts without wiring up custom dashboards. Combined with Vellum’s evaluation and monitoring features, you can trace each agent decision, validate outcomes, and enforce safe tool usage.
Understanding the components
The integration relies on three key components working in concert: the MCP server, a configuration file, and Vellum’s Agent Node.
1. Google’s MCP toolbox server
The toolbox server is a lightweight service you run locally or on your infrastructure. Its primary job is to listen for incoming tool-call requests from an AI agent. When it receives a request, it validates it against a set of predefined tools and, if valid, executes the corresponding SQL statement against your database. This creates a secure boundary, ensuring the LLM can only perform actions you have explicitly allowed.
2. The tools.yaml configuration file
This YAML file is the heart of the MCP Toolbox setup. It’s where you define your database connection and the specific tools the agent can use. The structure is straightforward:
- sources: This section contains the connection details for your database(s), such as host, port, and credentials.
- tools: Here, you define each individual action. For each tool, you specify its name, a natural language description (which is critical for the LLM to understand when to use the tool), the expected parameters, and the exact SQL statement to execute.
By defining tools this way, you maintain complete control over how your database is accessed.
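To see why this is safer than letting the model write SQL, here is a rough Python sketch of what an allowlisted, parameterized tool amounts to conceptually. This is an illustration only, not the toolbox's actual implementation; the function name and the psycopg2 usage are our own assumptions.

# Conceptual illustration of an allowlisted, parameterized tool.
# NOT the toolbox's internal code; it only shows why the pattern is safe.
import psycopg2

def search_hotels_by_name(conn, name: str):
    # The SQL text is fixed at configuration time; the LLM only supplies the
    # `name` value, which the driver binds as a parameter, so arbitrary SQL
    # can never be injected.
    with conn.cursor() as cur:
        cur.execute(
            "SELECT * FROM hotels WHERE name ILIKE '%%' || %s || '%%';",
            (name,),
        )
        return cur.fetchall()

The agent never sees or edits the statement itself; it can only choose which tool to call and which parameter values to pass.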
3. Vellum’s agent node
Within your Vellum workflow, the Agent Node acts as the agent’s brain. It receives the user’s prompt, accesses the list of available tools from the MCP server, and decides which tool (if any) is appropriate to call. It then formulates the request with the necessary parameters and sends it to the MCP server’s endpoint. After the tool executes, the result is sent back to the Agent Node, which uses it to formulate a final response to the user.
Demo: Integrating MCP into a Vellum agent
This demo showcases the integration between Vellum workflows and Google's MCP Toolbox, allowing you to create AI agents that interact with databases through MCP Toolbox tools.
In this demo, we'll build a simple hotel booking assistant.
Step 1: Set up your database and MCP toolbox
First, you’ll need a database. This example uses PostgreSQL. Follow the initial setup instructions in the MCP Toolbox documentation to set up your database and install the toolbox CLI.
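If you just want a quick local table to test against, the sketch below creates and seeds a hotels table with psycopg2. The column names and types are inferred from the tool statements defined in Step 2, and the seed rows reuse the hotel names from the demo conversation later on, so treat this as an assumption and adjust it to match your actual schema.

# Minimal local test data for the demo (assumed schema, inferred from tools.yaml).
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432, dbname="postgres",
    user="postgres", password="<your-password>",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS hotels (
            id            SERIAL PRIMARY KEY,
            name          TEXT NOT NULL,
            location      TEXT NOT NULL,
            checkin_date  DATE,
            checkout_date DATE,
            booked        BIT(1) DEFAULT B'0'
        );
    """)
    cur.execute(
        "INSERT INTO hotels (name, location) VALUES (%s, %s), (%s, %s);",
        ("Holiday Inn Hotel", "Zurich", "Hotel Frenzy", "Zurich"),
    )
conn.close()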
Step 2: Configure your tools
Create a tools.yaml file to define how the agent can interact with your hotel database. This file will specify the database connection and define tools for searching and booking hotels.
sources:
  my-pg-source:
    kind: postgres
    host: <your-host>
    port: 5432
    database: postgres
    user: postgres
    password: <your-password>
tools:
  search-hotels-by-name:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on name.
    parameters:
      - name: name
        type: string
        description: The name of the hotel.
    statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
  search-hotels-by-location:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on location.
    parameters:
      - name: location
        type: string
        description: The location of the hotel.
    statement: SELECT * FROM hotels WHERE location ILIKE '%' || $1 || '%';
  book-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: >-
      Book a hotel by its ID. If the hotel is successfully booked, returns a NULL, raises an error if not.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to book.
    statement: UPDATE hotels SET booked = B'1' WHERE id = $1;
  update-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: >-
      Update a hotel's check-in and check-out dates by its ID. Returns a message
      indicating whether the hotel was successfully updated or not.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to update.
      - name: checkin_date
        type: string
        description: The new check-in date of the hotel.
      - name: checkout_date
        type: string
        description: The new check-out date of the hotel.
    statement: >-
      UPDATE hotels SET checkin_date = CAST($2 as date), checkout_date = CAST($3
      as date) WHERE id = $1;
  cancel-hotel:
    kind: postgres-sql
    source: my-pg-source
    description: Cancel a hotel by its ID.
    parameters:
      - name: hotel_id
        type: string
        description: The ID of the hotel to cancel.
    statement: UPDATE hotels SET booked = B'0' WHERE id = $1;
toolsets:
  my-toolset:
    - search-hotels-by-name
    - search-hotels-by-location
    - book-hotel
    - update-hotel
    - cancel-hotel
Step 3: Run the toolbox server
With your configuration file ready, start the MCP server from your terminal. This command points the toolbox to your definitions and starts the local service.
toolbox --tools-file "tools.yaml"
The server will now be running and listening for requests, by default at http://127.0.0.1:5000, with the MCP endpoint exposed at http://127.0.0.1:5000/mcp (the URL we'll point the Vellum agent at in Step 4).
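Before wiring it into Vellum, you can optionally sanity-check that your toolset is being served. The sketch below assumes the toolbox-core Python client (pip install toolbox-core) and its ToolboxSyncClient interface; if your installed SDK version exposes a different API, follow the MCP Toolbox SDK docs instead.

# Optional sanity check: load the toolset defined in tools.yaml from the running server.
# Assumes the `toolbox-core` client; adjust if your installed version's API differs.
from toolbox_core import ToolboxSyncClient

with ToolboxSyncClient("http://127.0.0.1:5000") as client:
    tools = client.load_toolset("my-toolset")
    print(f"Loaded {len(tools)} tools from my-toolset")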
Step 4: Configure the Vellum Agent node
To build the Agent, we'll rely on Vellum's Workflows SDK; follow our recommended installation steps. Once you've verified the SDK installation, you can continue with the rest of the steps below.
The code below defines the Agent node that our final AI workflow (MCPToolboxWorkflow) will be wired to. Here’s how it works:
- The user provides a query (like "Book the Hotel Frenzy for me").
- The prompt has two messages:
  - General instructions in a system message: "You are a helpful assistant…"
  - A user message containing whatever the user typed.
- The gemini-2.5-pro model sees the prompt and decides how to respond.
- If the query requires the database, the model calls a tool on the toolbox MCP server, which executes the real database action.
- The result is returned, and the node outputs both text and chat history (as defined in the workflow).
from vellum.client.types.chat_message_prompt_block import ChatMessagePromptBlock
from vellum.client.types.plain_text_prompt_block import PlainTextPromptBlock
from vellum.client.types.rich_text_prompt_block import RichTextPromptBlock
from vellum.client.types.variable_prompt_block import VariablePromptBlock
from vellum.workflows.nodes.displayable.tool_calling_node import ToolCallingNode
from vellum.workflows.types.definition import MCPServer

from ..inputs import Inputs


class Agent(ToolCallingNode):
    """
    A tool calling node that uses the MCP Toolbox server.
    """

    ml_model = "gemini-2.5-pro"
    blocks = [
        ChatMessagePromptBlock(
            chat_role="SYSTEM",
            blocks=[
                RichTextPromptBlock(
                    blocks=[
                        PlainTextPromptBlock(
                            text="You are a helpful assistant. Use the tools provided to you to answer the user's question.",
                        ),
                    ],
                ),
            ],
        ),
        ChatMessagePromptBlock(
            chat_role="USER",
            blocks=[
                VariablePromptBlock(
                    input_variable="query",
                ),
            ],
        ),
    ]
    functions = [
        MCPServer(
            name="toolbox",
            url="http://127.0.0.1:5000/mcp",
        ),
    ]
    prompt_inputs = {
        "query": Inputs.query,
    }
Then, to invoke the Agent within our workflow, all we need is a workflow.py file like this one:
from vellum.workflows import BaseWorkflow
from vellum.workflows.state.base import BaseState

from .inputs import Inputs
from .nodes.agent_node import Agent


class MCPToolboxWorkflow(BaseWorkflow[Inputs, BaseState]):
    """
    An example workflow that uses the built-in ToolCallingNode with an MCP server.
    It interacts with the MCP Toolbox for Databases server to manage hotel bookings.

    To run this demo:
    - Start the MCP Toolbox server locally with your tools.yaml (see Step 3)
    - Run this workflow: `python -m examples.workflows.mcp_tool_calling_node_demo.chat`
    """

    graph = Agent

    class Outputs:
        text = Agent.Outputs.text
        chat_history = Agent.Outputs.chat_history
In this workflow, you pass in an MCP server definition that points at the locally hosted toolbox endpoint on your machine. Vellum will automatically fetch the available tools (such as search-hotels-by-location and book-hotel) and their descriptions from your server, making them available to the agent.
Step 5: Include a chat.py script for a command-line interface
The example repository includes a chat.py script that provides a simple command-line interface:
from dotenv import load_dotenv

from .inputs import Inputs
from .workflow import MCPToolboxWorkflow


def main():
    load_dotenv()  # load your Vellum API key
    workflow = MCPToolboxWorkflow()
    while True:
        query = input("Hi! I'm your hotel booking assistant. What can I do for you today? ")
        inputs = Inputs(query=query)
        terminal_event = workflow.run(inputs=inputs)
        if terminal_event.name == "workflow.execution.fulfilled":
            print("Answer:", terminal_event.outputs["text"])
        elif terminal_event.name == "workflow.execution.rejected":
            print("Workflow rejected:", terminal_event.error.code, terminal_event.error.message)


if __name__ == "__main__":
    main()
Step 6: Run the workflow and interact with the agent
Run the chat.py script with the following command:
python -m examples.workflows.mcp_tool_calling_node_demo.chat
Now you can chat with your database-connected agent:
You: Find hotels in Zurich
The agent will identify the intent, call the search-hotels-by-location tool via the MCP server, and return a list of matching hotels from your database.
You: Book Holiday Inn Hotel
The agent will use the hotel’s ID from the previous search, call the book-hotel tool, and the MCP server will execute the UPDATE statement. The agent will then confirm the action.
Agent: It has been booked.
You can verify in your database that the ‘booked’ status for that hotel has been updated, confirming the entire workflow was successful.
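For example, here is a quick check from Python; the column names are assumed from the tools.yaml statements above, so adjust them to your schema.

# Confirm the booking flag was flipped by the book-hotel tool.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432, dbname="postgres",
    user="postgres", password="<your-password>",
)
with conn.cursor() as cur:
    cur.execute("SELECT id, name, booked FROM hotels WHERE name ILIKE %s;", ("%Holiday Inn%",))
    for row in cur.fetchall():
        print(row)  # booked should now be '1'
conn.close()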
Conclusion: A powerful paradigm for AI applications
Connecting AI agents to live databases opens up a new frontier for building intelligent, useful applications. The combination of Vellum’s powerful agent orchestration and Google’s secure MCP Toolbox provides a robust, developer-friendly framework for doing so.
This approach offers significant benefits:
- Security: You avoid the risks of direct SQL generation by the LLM, instead relying on a predefined allowlist of actions.
- Maintainability: Your database logic is cleanly defined and version-controlled in the tools.yaml file, separate from your agent's prompting logic.
- Power: You can ground your AI agents in real-time, factual data, enabling them to perform meaningful tasks in the real world.
By following this pattern, you can start building the next generation of data-aware AI applications today.