If you’re reading this, you’re probably where I was a few days ago: looking at terms like CopilotKit and AG-UI and wondering why we need a few more new acronyms just to make an AI agent talk to a website.
I’ve been playing around with CopilotKit, building a UI and connecting it to an ADK (Agent Development Kit) agent I’m currently developing, and I finally realized that AG-UI is the protocol that makes AI agents feel like real apps instead of just a text box.
In this guide, I’ll walk you through how to connect these pieces: specifically, how to wrap a native ADK agent so it speaks this protocol, and how to build a custom “Agent Debugger” to see exactly what’s happening under the hood.
The challenge: Making agents “talk” to UIs
Imagine you’re building a house.
- ADK is the architect and the workers (the “Brain”). It decides what to build and how to do it.
- CopilotKit is the finished house that you can see and touch (the “UI”).
- AG-UI is the walkie-talkie system connecting them.
Without AG-UI, the workers would finish the whole house in silence, and you’d only see it when they’re totally done. With AG-UI, the workers are constantly chirping on the radio: “I’m laying the first brick,” “I’m painting the wall blue,” or “Hey, where do you want this window?”
What exactly is AG-UI?
AG-UI (Agent-User Interaction) is an open standard protocol designed to unify how AI agents communicate with user interfaces.
You might see references to “17 events” in some places and “30+” in others. This is because the protocol is evolving rapidly. The “Core 17” cover the essentials—text streaming, tool calls, and state updates. But as agents get smarter, the protocol expands to include things like “Draft Proposals,” “Human-in-the-loop” interruptions, and “Activity Snapshots.”
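To make this concrete, here is a sketch of what a minimal event sequence for a single run might look like. The event type names follow the AG-UI core events; the payload fields shown (messageId, delta, and so on) are illustrative, not an exhaustive schema:

```typescript
// Illustrative sketch of a minimal AG-UI event sequence for one agent run.
// Field names here are simplified for the example, not the full spec.
type AgUiEvent =
  | { type: "RUN_STARTED"; threadId: string; runId: string }
  | { type: "TEXT_MESSAGE_START"; messageId: string; role: "assistant" }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TEXT_MESSAGE_END"; messageId: string }
  | { type: "RUN_FINISHED"; threadId: string; runId: string };

const stream: AgUiEvent[] = [
  { type: "RUN_STARTED", threadId: "t1", runId: "r1" },
  { type: "TEXT_MESSAGE_START", messageId: "m1", role: "assistant" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "m1", delta: "Laying the " },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "m1", delta: "first brick." },
  { type: "TEXT_MESSAGE_END", messageId: "m1" },
  { type: "RUN_FINISHED", threadId: "t1", runId: "r1" },
];

// A UI rebuilds the assistant message by concatenating the text deltas
// as they arrive, which is what enables token-by-token streaming.
const text = stream
  .filter(
    (e): e is Extract<AgUiEvent, { type: "TEXT_MESSAGE_CONTENT" }> =>
      e.type === "TEXT_MESSAGE_CONTENT"
  )
  .map((e) => e.delta)
  .join("");

console.log(text); // "Laying the first brick."
```

The point of the protocol is exactly this: any backend that can emit this sequence works with any frontend that can consume it.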
Where does CopilotKit fit in?
CopilotKit is a framework built on top of AG-UI. It takes that stream of raw events and gives you:
- Rich UI Components: Pre-built chat bubbles (<CopilotChat />) that handle loading states, markdown rendering, and tool calls automatically.
- Headless Hooks: Low-level hooks (like useCopilotChat or useAgent) that give you raw access to the data so you can build anything, like the custom debugger you’re about to see.
You can use the AG-UI Dojo to explore these interactions hands-on.
Before I found this stack, the ADK app I was developing felt “clunky.” Users would ask a question, stare at a loading spinner for 10 seconds, and then—boom—a wall of text. We wanted to show intermediate thoughts and tool usage, but mapping raw ADK events to the frontend was becoming a headache of custom schemas and parsing logic.
I found out AG-UI can solve this by providing a standard set of signals that any backend can speak and any frontend can listen to, and CopilotKit acts as the facilitator to make this happen easily.
Getting started: The quick way
You don’t have to build a whole UI from scratch. You can use the CopilotKit CLI to scaffold a full-stack project that already has the ADK (or other Agentic Framework) backend and Next.js frontend wired up:
npx create-copilotkit-app@latest my-copilot-app --adk
To get the exact baseline I used, follow the instructions on CopilotKit’s ADK Quickstart Guide. From there, I modified the page.tsx, layout.tsx, and backend agent code to create the custom experience I’m describing below.
Understanding the architecture
To build a UI that communicates with our custom agent, we need a specific set of tools to bridge the gap between Google’s ADK and a React frontend:
- The Backend: The native Python ADK agent, wrapped in a translation layer (ag_ui_adk).
- The Middleware: A Next.js API route acting as the Copilot Runtime.
- The Frontend: A React app using CopilotKit to consume the event stream.
1. The backend: Wrapping the ADK agent
We need to use a package called ag_ui_adk which acts as a Protocol Adapter. It intercepts the native behaviors of the ADK agent—like starting a run or emitting a token—and translates them into standard AG-UI events.
Here is the actual code (which lives in a server.py file in our root agent’s source tree). Notice how root_agent is wrapped without changing its internal logic:
from fastapi import FastAPI
from ag_ui_adk import ADKAgent, add_adk_fastapi_endpoint
from data_science_agent.agent import root_agent
import uvicorn
import os
app = FastAPI(title="Data Science Agent - AG-UI Compatible")
# Wrap the existing native ADK agent
# This adapter translates ADK behaviors into AG-UI protocol events
adk_agent_wrapper = ADKAgent(
    adk_agent=root_agent,
    user_id="demo_user",
    session_timeout_seconds=3600,
    use_in_memory_services=True,
)
# Expose the wrapped agent on the root path
add_adk_fastapi_endpoint(app, adk_agent_wrapper, path="/")
if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    uvicorn.run(app, host="0.0.0.0", port=port)
2. The middleware: The API route
Next, we need to tell the frontend where to find this agent. For this, we can set up a Next.js API route (src/app/api/copilotkit/route.ts) to act as the gateway. This handles the secure connection to the Python backend.
import { CopilotRuntime, ExperimentalEmptyAdapter, copilotRuntimeNextJSAppRouterEndpoint } from "@copilotkit/runtime";
import { HttpAgent } from "@ag-ui/client";
import { NextRequest } from "next/server";
const runtime = new CopilotRuntime({
  agents: {
    // We register our Python agent here using the HttpAgent adapter
    "data_science_agent": new HttpAgent({ url: "http://localhost:8080/" }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter: new ExperimentalEmptyAdapter(),
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};
3. The frontend: Global context
Finally, we need to wrap the application in the <CopilotKit> provider in src/app/layout.tsx. This exposes the agent to every component in the app.
<CopilotKit
  publicLicenseKey="<YOUR_KEY>"
  runtimeUrl="/api/copilotkit"
  agent="data_science_agent" // Must match the key in route.ts
>
  {children}
</CopilotKit>
Visualizing the “Matrix”: Building an agent debugger
This is where things started to get interesting. I wanted to see exactly what was happening under the hood—not just the polished chat bubbles, but the raw signals. So, I decided to build a custom Agent Debugger to visualize the raw event stream.
In CopilotKit v1.5+, you can use the useAgent hook to subscribe directly to these events.

In the GIF above, you can see two different views of the same agent session. The lower window is the custom Developer Console, which captures and displays the raw AG-UI events as they stream in real time. I built this console just to demonstrate that you have full access to the “under-the-hood” signals CopilotKit receives. You can catch these same events to trigger any custom UI behavior you can imagine: updating a live dashboard, animating a model, or just logging for transparency. The upper window shows the developer tools provided natively by CopilotKit, where I first confirmed that CopilotKit was indeed receiving the agent’s AG-UI standard events in real time.
A technical discovery: During implementation, I initially assumed the events were wrapped in a payload object. After some debugging (and wondering why my logs were empty!), I realized that the raw events emitted by the hook are flat objects. The data needed, like delta or toolCallId, sits directly on the event object.
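In other words, a handler should read fields straight off the event. Here is a small sketch of what that looks like; the event type names match the ones the debugger renders below, and the exact field shapes are illustrative:

```typescript
// The events arrive as flat objects: fields like `delta` or `toolCallId`
// sit directly on the event, NOT under an `event.payload` wrapper.
// This type is a simplified illustration, not the full AG-UI schema.
type FlatEvent = { type: string; delta?: string; toolCallId?: string };

function describeEvent(event: FlatEvent): string {
  switch (event.type) {
    case "TEXT_MESSAGE_CONTENT":
      // Read directly: event.delta, not event.payload.delta
      return `text delta: ${event.delta}`;
    case "TOOL_CALL_ARGS":
      return `tool ${event.toolCallId} args: ${event.delta}`;
    default:
      return event.type;
  }
}

console.log(describeEvent({ type: "TEXT_MESSAGE_CONTENT", delta: "Hello" }));
// "text delta: Hello"
```

If you find yourself logging empty objects, checking for an accidental `event.payload` access is a good first debugging step.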
The custom debugger implementation
Here is the code of the AgentDebugger.tsx, the custom debugger UI you saw above. It subscribes to the stream and handles the flat event structure.
"use client";
import React, { useState, useEffect } from "react";
import { useAgent } from "@copilotkitnext/react";
export function AgentDebugger() {
  const [events, setEvents] = useState<any[]>([]);
  const { agent } = useAgent({
    agentId: "data_science_agent",
  });

  useEffect(() => {
    if (agent) {
      const subscriber = {
        // Catch-all handler for the raw stream
        onEvent: ({ event }: { event: any }) => {
          // Enrich with local timestamp if missing
          const enrichedEvent = {
            ...event,
            _receivedAt: Date.now(),
          };
          setEvents((prev) => [enrichedEvent, ...prev]);
        },
      };
      const subscription = agent.subscribe(subscriber);
      return () => subscription.unsubscribe();
    }
  }, [agent]);

  return (
    <div className="flex flex-col gap-2 h-96 overflow-y-auto bg-slate-50 p-4 rounded border">
      {events.map((event, idx) => (
        <GenericEventCard key={event.id || idx} event={event} />
      ))}
    </div>
  );
}
Rendering the event card
To visualize this, I built a GenericEventCard that color-codes the events, mirroring CopilotKit’s own color scheme purely for aesthetics: blue for text, purple for tools, and green for state.
function GenericEventCard({ event }: { event: any }) {
  // Note: Data is accessed DIRECTLY on the event object. There is no 'payload' wrapper!
  const { type, agentId } = event;
  const timestamp = event.timestamp || event._receivedAt;
  // ... (styling logic omitted for brevity)

  return (
    <div className="border border-gray-200 rounded p-3 text-sm bg-white font-mono">
      {/* Event Header */}
      <div className="flex justify-between mb-2">
        <span className={`font-bold uppercase ${getTypeColor(type)}`}>{type}</span>
        <span className="text-xs text-gray-500">{agentId}</span>
      </div>
      <div className="text-xs text-gray-700 overflow-x-auto break-all">
        {/* Text Streaming */}
        {type === "TEXT_MESSAGE_CONTENT" && (
          <div><span className="font-bold text-blue-400">Delta: </span>"{event.delta}"</div>
        )}
        {/* Tool Args */}
        {type === "TOOL_CALL_ARGS" && (
          <div><span className="font-bold text-purple-600">Arg Delta: </span>{event.delta}</div>
        )}
        {/* Raw View */}
        <details className="mt-2 text-[10px] text-gray-400">
          <summary className="cursor-pointer">Raw Event</summary>
          <pre className="bg-slate-50 p-2 mt-1">{JSON.stringify(event, null, 2)}</pre>
        </details>
      </div>
    </div>
  );
}
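The card above calls a getTypeColor helper whose body I omitted. For completeness, here is one possible implementation, following the color scheme described above; the specific Tailwind class names are my choice for this sketch, not part of any API:

```typescript
// One possible getTypeColor, matching the scheme above:
// blue for text events, purple for tool events, green for state events.
// AG-UI event type names share prefixes (TEXT_MESSAGE_*, TOOL_CALL_*,
// STATE_*), which makes a simple prefix match sufficient here.
function getTypeColor(type: string): string {
  if (type.startsWith("TEXT_MESSAGE")) return "text-blue-500";
  if (type.startsWith("TOOL_CALL")) return "text-purple-600";
  if (type.startsWith("STATE")) return "text-green-600";
  return "text-gray-500"; // lifecycle and anything else
}
```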
Key takeaways
- Decoupling: Using AG-UI means you can swap the backend logic (e.g., from LangGraph to ADK or CrewAI, or vice versa) without breaking your frontend. The “walkie-talkie signals” remain the same (that’s why protocols rule).
- Streaming is Key: Ensure your backend emits events as they happen. The magic of a responsive UI comes from handling RUN_STARTED and TOOL_CALL_START immediately, not waiting for the final response.
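As a small sketch of that second point: you can derive a live status label from events the moment they arrive, instead of waiting for the final answer. The event type names are AG-UI core events; the toolCallName field is an assumption for illustration:

```typescript
// Sketch: fold incoming events into a status line for the UI.
// `toolCallName` is an assumed field name for this illustration.
type StatusEvent = { type: string; toolCallName?: string };

function nextStatus(current: string | null, event: StatusEvent): string | null {
  switch (event.type) {
    case "RUN_STARTED":
      return "Thinking..."; // show feedback immediately, before any tokens
    case "TOOL_CALL_START":
      return `Running ${event.toolCallName ?? "a tool"}...`;
    case "RUN_FINISHED":
      return null; // clear the status line
    default:
      return current; // other events don't change the status line
  }
}
```

Wired into the onEvent subscriber shown earlier, this is enough to replace a 10-second blank spinner with a running commentary of what the agent is doing.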
I’m still learning about all this, but gaining visibility into the “brain” of my agent through these standardized events has completely changed how I build the application itself. It turns a “black box” into a transparent, interactive system.
Want to learn more?
Here are the resources I used to get up to speed with these tools. They were super helpful for understanding both the high-level concepts and the nitty-gritty implementation details:
- AG-UI Event Concepts: The official documentation explaining the event-driven architecture.
- Google ADK: Third-Party Tools (AG-UI): Google’s guide on how ADK officially supports and integrates with the AG-UI protocol.
- Building a Frontend for ADK Agents: A practical tutorial from CopilotKit on connecting these two worlds.
- Mastering the 17 AG-UI Event Types: A deep dive into the core events that make up the protocol; essential reading if you want to build custom handlers.
Let’s connect!
I hope this guide helps you unbox your own agents and build cool UIs for them in a simpler way. If you found this helpful, or if you’re building something cool with this stack, I’d love to hear about it.
Feel free to reach out or send me your feedback on LinkedIn. Happy coding!


