A Practical Guide to Production-Ready Agentic Workflows with ADK and Agent Engine


TL;DR: This article guides you through building a practical agentic workflow. First, design specialized agents by breaking a complex task down into real-world roles (e.g., Researcher, Writer, Reviewer). Then, once you have prototyped the workflow locally (e.g., with ADK), deploy it as a scalable, production-ready system of remote microservices using Agent Engine’s Agent-to-Agent (A2A) architecture.


Agentic workflows, where multiple AI agents collaborate to solve complex problems, are quickly moving from theory to powerful, practical applications. This approach breaks a big task into smaller pieces, letting specialized agents handle different parts of a process, such as research, writing, and review.

But how do you go from a local prototype to a scalable, production-ready system? This article outlines a two-step process:

  1. Designing a realistic, multi-agent workflow.
  2. Deploying it as a remote Agent-to-Agent (A2A) architecture using Agent Engine.

Step 1: Design a Realistic Agentic Workflow

Before writing any code, the most important step is designing the workflow itself. How should you divide the tasks? What agents do you need?

A straightforward and effective initial approach is to divide agents based on real-world job roles.

Strategy 1: Divide Agents by Real-World Roles

Let’s use “creating an online article” as our example. A realistic business workflow isn’t just one step. It involves several roles:

  • (1) User: Inputs the article’s theme.
  • (2) Research Agent: Gathers information and creates a research report.
  • (3) Writer Agent: Creates the article based on the research report.
  • (4) Reviewer Agent: Reviews the article for quality and policy compliance.
  • (5) User: Decides if revisions are needed (looping back to step 3) or if the process is complete.

This role-based division is effective because real-world roles often require different, sometimes conflicting, skills.

  • A Writer should be creative and explore fresh perspectives.
  • A Reviewer must be conservative and strictly check content against quality standards and policies.

It’s hard for a single person (or agent) to excel at both. By separating the Writer Agent and Reviewer Agent, each can be fine-tuned for its specific job. This “separation of concerns” also makes your agents more reusable. You could swap in different “Writer Agents” with various personalities or use a “Reviewer Agent” specialized in a specific topic.

Strategy 2: Subdivide Agents by Work Steps

You can further refine your design by subdividing agents based on their specific work steps, similar to a microservice architecture.

For example, the “Research Agent’s” job could be broken down further, inspired by “deep research” methodology:

  • Agent 2a (Topic Selector): First selects the most useful research topics for the article.
  • Agent 2b (Topic Compiler): Then compiles the research findings for each selected topic.

For a simple demo, a single agent might handle both tasks. But as you add complex features (like customizing research topics), subdividing agents by work steps becomes highly effective. It’s often best to start simple and consider subdivision as your agent’s functionality grows.
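
To make this subdivision concrete, here is a minimal sketch of what it could look like in ADK. The agent names, model, and instructions are illustrative assumptions, and SequentialAgent is simply one way to chain the two work steps in order:

from google.adk.agents import LlmAgent, SequentialAgent

topic_selector = LlmAgent(
    name='topic_selector',
    model='gemini-2.5-flash',
    instruction='Select about 5 research topics that would be most useful for the requested article theme.',
)

topic_compiler = LlmAgent(
    name='topic_compiler',
    model='gemini-2.5-flash',
    instruction='Compile the research findings for each selected topic into a single research report.',
)

# The two work steps run in order and together play the "Research Agent" role.
research_pipeline = SequentialAgent(
    name='research_pipeline',
    sub_agents=[topic_selector, topic_compiler],
)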

Finalizing the Workflow

Once your agents are defined, map out the entire process.

  1. Define Clear Tasks: Be specific about each agent’s job. What is its exact input? What is its expected output? If you can’t define this, you can’t build the agent.
  2. Map the Flow: Organize the order of tasks. Note where the process is linear (A → B → C) and where it branches.
  3. Identify Branching Types: Branching typically happens in two ways:
    • Human-in-the-Loop (HITL): A human makes a decision (e.g., “approve article” or “request revisions”).
    • Automated Logic: The flow changes based on a predetermined condition (e.g., “if the review score is below 80, send back to writer”), as sketched after this list.
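
For the automated-logic case, the branching condition can live in ordinary code that the orchestrator consults after the review step. A framework-agnostic sketch, where the score threshold and field name are illustrative assumptions:

def next_step(review_result: dict) -> str:
    """Decide where the workflow goes after the Reviewer Agent runs."""
    # Below the (assumed) quality threshold: loop back to the Writer Agent.
    if review_result.get('score', 0) < 80:
        return 'writer_agent'
    # Otherwise the review passed and the workflow can finish.
    return 'complete'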

In our example, the workflow looks like the diagram below.

With this diagram, you’re ready to prototype. Using a framework like the Agent Development Kit (ADK), you could implement this by defining each agent’s task (LlmAgent), grouping linear steps (SequentialAgent), and managing the overall flow and human-in-the-loop branching (a root agent).
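
As a minimal sketch of that prototype (the names, model, and instructions below are illustrative placeholders, not the notebook’s exact implementation):

from google.adk.agents import LlmAgent, SequentialAgent

writer_agent = LlmAgent(
    name='writer_agent',
    model='gemini-2.5-flash',
    instruction='Write an article based on the research report you receive.',
)

reviewer_agent = LlmAgent(
    name='reviewer_agent',
    model='gemini-2.5-flash',
    instruction='Review the article for quality and policy compliance, and summarize any issues.',
)

# Linear steps (write, then review) are grouped with a SequentialAgent.
writing_pipeline = SequentialAgent(
    name='writing_pipeline',
    sub_agents=[writer_agent, reviewer_agent],
)

# The root agent manages the overall flow, including the human-in-the-loop
# decision to request revisions or finish. The research pipeline would be
# added to sub_agents in the same way.
root_agent = LlmAgent(
    name='root_agent',
    model='gemini-2.5-flash',
    instruction='Coordinate the article-creation workflow and ask the user whether revisions are needed after each review.',
    sub_agents=[writing_pipeline],
)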

For a concrete implementation of this local workflow, see the first notebook:

Step 2: Deploy as Remote A2A Agents with Agent Engine

Running a multi-agent system as a local script is great for development. A production system, however, needs to be scalable, robust, and easy to maintain.

This is where Agent Engine comes in. It allows you to deploy your workflow as a remote Agent-to-Agent (A2A) architecture.

Instead of existing in one codebase, each agent becomes its own independent, remote service. This “microservice” approach has huge advantages:

  • Decoupled Services: Each agent (Research, Writer, Reviewer) is its own service. You can develop, update, and deploy them independently.
  • A2A Communication: Agents communicate with each other through A2A API calls handled by Agent Engine, replacing the local, in-process calls from the prototype.
  • Scalability & Resilience: Is the “Research Agent” slow? Scale it up without touching the “Writer Agent.” If the “Reviewer Agent” fails, it can be restarted without bringing down the entire workflow.

This A2A architecture transforms your agentic workflow from a monolithic script into a distributed system of specialized AI services ready for enterprise-grade applications.

The Easy Transition with RemoteA2aAgent

The ADK provides a powerful feature called RemoteA2aAgent to make this transition seamless.

It acts as a local proxy for your remote agent. Behind the scenes, it handles all the A2A communication. In your code, you can simply replace the locally defined agent with the RemoteA2aAgent to use the remote A2A-deployed agent transparently.

Let’s see just how simple this transition is with a concrete example.

First, here is how you might define the local version of our research agent (the one that selects topics) using ADK’s LlmAgent. You would then assign this research_agent1 object to the sub_agents list of your root agent (the main orchestrating agent).

from google.adk.agents import LlmAgent

instruction = '''
Your role is to gather the necessary information for writing an article and compile it into a research report.
You will create a list of about 5 topics to be used as a reference when writing an article on a specified theme.
A subsequent agent will create the research report based on this list.
'''

research_agent1 = LlmAgent(
    name='research_agent1',
    model='gemini-2.5-flash',
    description='''
An agent that gathers the necessary information for writing an article
and compiles it into a report (theme selection)
    ''',
    instruction=instruction,
)

Now, let’s say you’ve deployed that exact agent as a remote service on Agent Engine. To use it in your workflow, you don’t need to rewrite your root agent’s logic. You simply define a RemoteA2aAgent proxy for it, like this:

from google.adk.agents.remote_a2a_agent import RemoteA2aAgent

research_agent1_remoteA2a = RemoteA2aAgent(
    name='research_agent1',
    description='''
An agent that gathers the necessary information for writing an article
and compiles it into a report (theme selection)
    ''',
    agent_card=f'{a2a_url}/v1/card',  # a2a_url holds the URL of the deployed A2A server.
    a2a_client_factory=factory,       # factory is an A2A client factory configured elsewhere.
)

The key takeaway is that your root agent’s code doesn’t change. You can assign research_agent1_remoteA2a to the same sub_agents list just as you did with the local version. The RemoteA2aAgent handles all the complex remote communication, making your local and remote workflows look nearly identical from the orchestrator’s point of view.
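
For illustration, the swap in the root agent might look like this (the surrounding sub-agents are hypothetical placeholders):

root_agent = LlmAgent(
    name='root_agent',
    model='gemini-2.5-flash',
    instruction='Coordinate the article-creation workflow.',
    sub_agents=[
        research_agent1_remoteA2a,  # previously: research_agent1 (the local version)
        # ...the writer, reviewer, and any other sub-agents stay exactly as before
    ],
)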

Implementation Example

This article provides the high-level concept. For a detailed, hands-on implementation that shows exactly how to expand a local agentic workflow into a remote A2A architecture using Agent Engine, refer to the following notebook:

Conclusion

Moving from a single-prompt AI to a collaborative multi-agent system represents a significant leap in capability. As this article has shown, the path from a local prototype to a production-ready application is clear and achievable.

It begins with thoughtful design—breaking down complexity by modeling agents on specialized, real-world roles to create a modular and reusable system. It then transitions to robust deployment, using an Agent-to-Agent (A2A) architecture with Agent Engine to build scalable, decoupled, and resilient applications.

This two-step pattern of designing by role and deploying as remote services gives you a powerful framework for building the next generation of sophisticated AI applications. We encourage you to explore the notebooks and start building your own agentic workflows today.
