Unlocking Multi-Agent A2A: How to Connect CrewAI and ADK on Google Cloud


This article is co-authored with Neel Mani (@manineel).

The world of AI is buzzing with the potential of multi-agent systems, where specialized AI agents collaborate to tackle complex problems far beyond the scope of a single model. But this rapid progress has hit a roadblock: fragmentation. Agents built with different frameworks like CrewAI, LangGraph, or Google’s Agent Development Kit (ADK) don’t speak the same language. This locks developers into silos and makes true collaboration a messy, frustrating challenge.

Imagine asking a user to manually copy-paste information between three different apps to get one job done. That’s the reality for many AI systems today, with the user acting as a “human API” to bridge the communication gap. This is where the Agent-to-Agent (A2A) protocol comes in—a universal translator designed to break down these barriers and let agents cooperate seamlessly.

Let’s explore how to build a hybrid system where agents from different frameworks work together, all deployed as scalable microservices on Google Cloud Run.

An Architecture for Collaboration

To see this in action, we’ll design a simple application made of three distinct agents:

  1. The Host Agent (ADK): The project manager. This agent acts as the central orchestrator, taking user requests and delegating sub-tasks to the right specialist.
  2. The Research Agent (CrewAI): The investigator. Built with the popular CrewAI framework, this agent’s job is to search the web and gather information on a specific topic.
  3. The Summarizer Agent (ADK): The editor. This ADK-based agent takes the raw research and distills it into a concise, easy-to-read summary.

This microservices-based approach means each agent can be developed, deployed, and scaled independently, offering incredible flexibility.

Make your agents “A2A-ready”

The A2A protocol enables different AI agents to communicate and collaborate. Here are the major components involved in enabling two agents to communicate over A2A; a minimal Agent Card sketch follows the list.

  • A2A Client: The agent that initiates communication and delegates a request.
  • A2A Server: The agent that receives the request, does the work, and responds.
  • Agent Card: A digital resume (JSON file) advertising an agent’s skills and identity.
  • Task: A specific unit of work with a lifecycle for tracking its progress.
  • Message: A single turn in the conversation between agents, like a prompt or reply.
  • Artifact: The final, tangible product created by the agent, like a document or image.
  • Part: The actual content (text, file, or data) inside a message or artifact.
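To make the Agent Card concrete, here is a minimal card for the Research Agent built with the a2a-sdk’s Pydantic types. Treat it as an illustrative sketch: the skill, descriptions, and URL are placeholders, not the exact card from the repo.

# A minimal Agent Card sketch (names, URL, and skill are illustrative).
from a2a.types import AgentCapabilities, AgentCard, AgentSkill

research_skill = AgentSkill(
    id="web_research",
    name="Web Research",
    description="Searches the web and gathers information on a topic.",
    tags=["research", "search"],
    examples=["Find recent developments in the A2A protocol"],
)

agent_card = AgentCard(
    name="Research Agent",
    description="A CrewAI agent that researches topics on the web.",
    url="https://a2a-researcher-crewai-<hash>.a.run.app",  # the agent's serving URL
    version="1.0.0",
    defaultInputModes=["text"],
    defaultOutputModes=["text"],
    capabilities=AgentCapabilities(streaming=False),
    skills=[research_skill],
)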

Bringing Agents to Life with Code

The key is to wrap each agent in a simple web server that exposes A2A-compliant endpoints. This involves creating an AgentExecutor class that acts as a bridge between the A2A protocol and your agent’s native logic.

Wrapping a CrewAI Agent

For our CrewAI researcher, the AgentExecutor handles incoming A2A requests, invokes the agent’s core logic, and packages the result into a final A2A artifact.

# researcher_agent_crewai/agent_executor.py
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue
from a2a.server.tasks import TaskUpdater
from a2a.types import (
    InternalError,
    InvalidParamsError,
    Part,
    TextPart,
    UnsupportedOperationError,
)
from a2a.utils.errors import ServerError

from agent import ResearchAgent  # the CrewAI crew, wrapped in a small class


class ResearchAgentExecutor(AgentExecutor):
    """Bridges the A2A protocol and the CrewAI research agent."""

    def __init__(self):
        self.agent = ResearchAgent()

    async def execute(
        self,
        context: RequestContext,
        event_queue: EventQueue,
    ) -> None:
        """Executes the research agent for an incoming A2A request."""
        if not context.task_id or not context.context_id:
            raise ValueError("RequestContext must have task_id and context_id")
        if not context.message:
            raise ValueError("RequestContext must have a message")

        # TaskUpdater publishes lifecycle events (submitted, working,
        # completed) for this task back through the A2A server.
        updater = TaskUpdater(event_queue, context.task_id, context.context_id)
        if not context.current_task:
            await updater.submit()
        await updater.start_work()

        # _validate_request returns True when the request is malformed.
        if self._validate_request(context):
            raise ServerError(error=InvalidParamsError())

        query = context.get_user_input()
        try:
            result = self.agent.invoke(query)
            print(f"Final Result ===> {result}")
        except Exception as e:
            print(f"Error invoking agent: {e}")
            raise ServerError(error=InternalError()) from e

        # Package the research output as the task's final artifact.
        parts = [Part(root=TextPart(text=result))]
        await updater.add_artifact(parts)
        await updater.complete()

    def _validate_request(self, context: RequestContext) -> bool:
        # The explicit checks at the top of execute() already cover our needs.
        return False

    async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None:
        raise ServerError(error=UnsupportedOperationError())
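To serve this executor over HTTP, the a2a-sdk ships a Starlette application wrapper. A minimal entry point might look like the sketch below; the agent_card import is hypothetical (it stands in for the card shown earlier), and the PORT handling follows Cloud Run’s convention.

# researcher_agent_crewai/__main__.py — minimal A2A server bootstrap (sketch)
import os

import uvicorn
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore

from agent_card import agent_card  # hypothetical module holding the AgentCard
from agent_executor import ResearchAgentExecutor

request_handler = DefaultRequestHandler(
    agent_executor=ResearchAgentExecutor(),
    task_store=InMemoryTaskStore(),  # tracks task state in memory
)
server = A2AStarletteApplication(agent_card=agent_card, http_handler=request_handler)

# Cloud Run tells the container which port to listen on via $PORT.
uvicorn.run(server.build(), host="0.0.0.0", port=int(os.getenv("PORT", "8080")))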

Set Up the Host Agent: Delegating Tasks

The Host Agent uses a send_message tool to communicate with the remote agents it is connected to. When the LLM decides it needs to delegate, it calls this tool to pick another agent and send it a task. The host discovers other agents by fetching their AgentCard, which lists their skills and a communication URL.
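Discovery itself is only a few lines with the a2a-sdk client helpers. A hedged sketch, assuming the A2ACardResolver API and an illustrative base URL:

# Sketch: fetch a remote agent's card from its well-known endpoint.
import httpx

from a2a.client import A2ACardResolver


async def discover_agent(base_url: str):
    async with httpx.AsyncClient() as http_client:
        resolver = A2ACardResolver(httpx_client=http_client, base_url=base_url)
        card = await resolver.get_agent_card()  # GET /.well-known/agent.json
        print(f"Discovered {card.name}: {card.description}")
        return card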

# host_agent_adk/agent.py
import uuid

from a2a.types import MessageSendParams, SendMessageRequest, SendMessageResponse
from google.adk.tools.tool_context import ToolContext


class HostAgent:
    # ... agent setup and discovery ...

    async def send_message(self, agent_name: str, task: str, tool_context: ToolContext):
        """Sends a task to a remote agent over A2A."""
        if agent_name not in self.remote_agent_connections:
            raise ValueError(f"Agent {agent_name} not found")
        client = self.remote_agent_connections[agent_name]
        if not client:
            raise ValueError(f"Client not available for {agent_name}")

        # Simplified task and context ID management: reuse any IDs stored in
        # the session state so a multi-turn exchange stays on the same task.
        state = tool_context.state
        task_id = state.get("task_id", str(uuid.uuid4()))
        context_id = state.get("context_id", str(uuid.uuid4()))
        message_id = str(uuid.uuid4())

        payload = {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": task}],
                "messageId": message_id,
                "taskId": task_id,
                "contextId": context_id,
            },
        }
        message_request = SendMessageRequest(
            id=message_id, params=MessageSendParams.model_validate(payload)
        )
        send_response: SendMessageResponse = await client.send_message(message_request)
        print("send_response", send_response)

        # ... process the response ... (see the sketch below)
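What “process the response” looks like depends on the host’s needs, but a common pattern is to check for a successful Task result and collect the text from its artifacts. A minimal sketch, assuming the a2a.types response models:

# Sketch: pull artifact text out of a send_message response.
from a2a.types import (
    SendMessageResponse,
    SendMessageSuccessResponse,
    Task,
    TextPart,
)


def extract_text(send_response: SendMessageResponse) -> str | None:
    root = send_response.root
    # Errors arrive as a JSON-RPC error response; successes may carry a
    # full Task (with artifacts) or just a bare Message.
    if not isinstance(root, SendMessageSuccessResponse):
        return None
    if not isinstance(root.result, Task):
        return None
    texts = [
        part.root.text
        for artifact in (root.result.artifacts or [])
        for part in artifact.parts
        if isinstance(part.root, TextPart)
    ]
    return "\n".join(texts) if texts else None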

Deploying to Google Cloud Run

Once the agents are “A2A-ready,” deploying them is straightforward. Each agent is containerized using a Dockerfile and deployed to Google Cloud Run with a single gcloud command. Cloud Run automatically provides a unique and secure HTTPS URL for each agent, which the Host Agent uses to discover and communicate with them.

# Deploy the researcher agent to Cloud Run
# cd researcher_agent_crewai
gcloud run deploy a2a-researcher-crewai \
    --source . \
    --region us-central1 \
    --set-env-vars=HOST_OVERRIDE=<YOUR_UNIQUE_CLOUD_RUN_URL> \
    --allow-unauthenticated
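Once the service is live, a quick way to confirm it is “A2A-ready” is to fetch its Agent Card from the well-known discovery path (the long-standing default in the a2a-sdk; newer spec revisions may serve a different card path):

# Verify the deployed agent serves its Agent Card
curl https://<YOUR_UNIQUE_CLOUD_RUN_URL>/.well-known/agent.json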

Conclusion

The future of AI is collaborative, not siloed. By using the A2A protocol, you can connect different frameworks like CrewAI and Google’s ADK to build powerful, hybrid multi-agent systems. This unlocks a flexible, microservices-based approach to building and scaling your AI solutions. Ready to start building? Explore the complete source code on our GitHub repository and deploy your own agents today.