Google ADK
Integrate Orq.ai with Google Agent Development Kit using OpenTelemetry
Getting Started
Google Agent Development Kit (ADK) is a flexible, model-agnostic framework for developing and deploying AI agents. ADK simplifies building complex agent architectures with workflow orchestration, tool integration, and multi-agent collaboration. Integrate with Orq.ai to monitor agent behavior, track tool usage, analyze workflows, and optimize your agent systems.
Prerequisites
Before you begin, ensure you have:
- An Orq.ai account and API key
- Google API key (for model access)
- Python 3.8+ or Java 11+
- Google ADK installed in your project
Install Dependencies
Python:
# Google ADK
pip install google-adk
# OpenTelemetry packages
pip install opentelemetry-sdk opentelemetry-exporter-otlp
# OpenInference instrumentation for ADK (if available)
pip install openinference-instrumentation-google-adk
Java:
<!-- For Maven -->
<dependency>
    <groupId>com.google.ai</groupId>
    <artifactId>google-adk</artifactId>
    <!-- Replace "latest" with the current release version -->
    <version>latest</version>
</dependency>
// For Gradle
implementation 'com.google.ai:google-adk:latest'
Configure Orq.ai
Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=google-adk-app,service.version=1.0.0"
export GOOGLE_API_KEY="your-google-api-key"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=google-adk-app,service.version=1.0.0"
$env:GOOGLE_API_KEY = "your-google-api-key"
Using a .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=google-adk-app,service.version=1.0.0
GOOGLE_API_KEY=your-google-api-key
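If you use a .env file, load it into the process environment before the OpenTelemetry SDK or ADK reads any settings. A minimal sketch, assuming the python-dotenv package is installed:
# pip install python-dotenv
from dotenv import load_dotenv

# Load OTEL_* and GOOGLE_API_KEY from .env into os.environ
# before initializing OpenTelemetry or creating any agents
load_dotenv()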
Integrations
Choose your preferred OpenTelemetry framework for collecting traces:
OpenInference
Best for: Comprehensive ADK instrumentation with automatic tool and model tracking
pip install openinference-instrumentation-google-adk
from openinference.instrumentation.google_adk import GoogleADKInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
import os
# Initialize OpenTelemetry
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "google-adk-app"),
    "service.version": "1.0.0"
})
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT"),
        # ORQ_API_KEY is your Orq.ai API key
        headers={"Authorization": f"Bearer {os.getenv('ORQ_API_KEY')}"}
    )
)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
# Instrument Google ADK
GoogleADKInstrumentor().instrument()
# Now use ADK normally - all agent operations will be traced
import google.adk as adk
# Your ADK agent code here
Manual OpenTelemetry
Best for: Full control over tracing and custom instrumentation
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
import google.adk as adk
import os
# Initialize OpenTelemetry
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "google-adk-app"),
    "service.version": "1.0.0"
})
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter())  # endpoint and auth headers are read from the OTEL_EXPORTER_OTLP_* environment variables
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("google-adk")
# Wrapper for tracing agent operations
def trace_agent_run(agent, input_data):
    """Run an agent with OpenTelemetry tracing"""
    with tracer.start_as_current_span("adk.agent.run") as span:
        try:
            # Add attributes
            span.set_attributes({
                "agent.type": type(agent).__name__,
                "agent.input": str(input_data)[:200],  # First 200 chars
                "agent.model": getattr(agent, "model", "unknown")
            })
            # Run the agent
            result = agent.run(input_data)
            # Track success
            span.set_attribute("agent.success", True)
            span.set_attribute("agent.output.length", len(str(result)))
            return result
        except Exception as e:
            span.record_exception(e)
            span.set_attribute("agent.success", False)
            raise
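For example, wrapping a call to an agent built as in the Examples section (a usage sketch; LLMAgent and its parameters follow the document's own examples):
from google.adk.agents import LLMAgent

agent = LLMAgent(
    model="gemini-pro",
    system_prompt="You are a helpful assistant."
)

# Each call is recorded as an "adk.agent.run" span with input/output attributes
result = trace_agent_run(agent, "What is the capital of France?")
print(result)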
Examples
Basic LLM Agent
import google.adk as adk
from google.adk.agents import LLMAgent
# Create a simple LLM agent
agent = LLMAgent(
    model="gemini-pro",
    system_prompt="You are a helpful assistant that answers questions concisely."
)
# Run the agent with tracing
response = agent.run("What is the capital of France?")
print(response)
Workflow Agent with Sequential Steps
from google.adk.agents import WorkflowAgent, SequentialAgent
from google.adk.tools import WebSearchTool, SummaryTool
# Create a workflow with sequential steps
workflow = SequentialAgent([
    WebSearchTool(),
    SummaryTool()
])
# Create workflow agent
agent = WorkflowAgent(
    workflow=workflow,
    model="gemini-pro"
)
# Execute workflow
result = agent.run("Find recent news about renewable energy and summarize")
print(result)
Parallel Workflow Agent
from google.adk.agents import ParallelAgent
from google.adk.tools import WebSearchTool, WikipediaTool, NewsTool
# Create parallel workflow for multi-source research
parallel_agent = ParallelAgent([
    WebSearchTool(),
    WikipediaTool(),
    NewsTool()
])
# Execute parallel research
results = parallel_agent.run("climate change impacts")
for source, data in results.items():
    print(f"{source}: {data[:200]}...")
Multi-Agent System
from google.adk.agents import MultiAgent, LLMAgent
# Create specialized agents
researcher = LLMAgent(
    model="gemini-pro",
    system_prompt="You are a research specialist. Find and analyze information."
)
writer = LLMAgent(
    model="gemini-pro",
    system_prompt="You are a technical writer. Create clear documentation."
)
reviewer = LLMAgent(
    model="gemini-pro",
    system_prompt="You are an editor. Review and improve content."
)
# Create multi-agent system
multi_agent = MultiAgent({
    "research": researcher,
    "write": writer,
    "review": reviewer
})
# Orchestrate multi-agent workflow
task = "Create a technical guide about quantum computing"
research_data = multi_agent.agents["research"].run(f"Research: {task}")
draft = multi_agent.agents["write"].run(f"Write based on: {research_data}")
final = multi_agent.agents["review"].run(f"Review and improve: {draft}")
print(final)
Custom Tool Integration
from google.adk.tools import Tool
from google.adk.agents import LLMAgent
class WeatherTool(Tool):
    """Custom tool for weather information"""

    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a location"
        )

    def run(self, location: str) -> str:
        # Simulated weather API call
        return f"The weather in {location} is sunny and 72°F"

# Create agent with custom tool
agent = LLMAgent(
    model="gemini-pro",
    tools=[WeatherTool()]
)
response = agent.run("What's the weather like in San Francisco?")
print(response)
Loop Agent for Iterative Tasks
from google.adk.agents import LoopAgent, LLMAgent
# Create loop agent for iterative refinement
loop_agent = LoopAgent(
    agent=LLMAgent(model="gemini-pro"),
    max_iterations=3,
    condition=lambda x: "satisfactory" not in x.lower()
)
# Run iterative improvement
result = loop_agent.run(
    "Generate a product description for a smartwatch. "
    "Refine until it's satisfactory."
)
print(result)
Agent with Memory
from google.adk.agents import LLMAgent
from google.adk.memory import ConversationMemory
# Create agent with conversation memory
memory = ConversationMemory(max_turns=10)
agent = LLMAgent(
    model="gemini-pro",
    memory=memory,
    system_prompt="You are a helpful assistant with memory of our conversation."
)
# Have a conversation
agent.run("My name is Alice and I work in robotics")
agent.run("What field do I work in?") # Agent remembers context
Error Handling with Tracing
from opentelemetry import trace
tracer = trace.get_tracer("google-adk-error-handling")
def safe_agent_execution(agent, task):
    """Execute agent with comprehensive error handling and tracing"""
    with tracer.start_as_current_span("adk.safe_execution") as span:
        try:
            # Set task context
            span.set_attributes({
                "task.description": task[:100],
                "agent.type": type(agent).__name__
            })
            # Execute with timeout
            result = agent.run(task, timeout=30)
            # Track success metrics
            span.set_attribute("execution.success", True)
            span.set_attribute("execution.result_size", len(str(result)))
            return result
        except TimeoutError as e:
            span.record_exception(e)
            span.set_attribute("error.type", "timeout")
            return "Task timed out. Please try a simpler request."
        except Exception as e:
            span.record_exception(e)
            span.set_attribute("error.type", type(e).__name__)
            span.set_attribute("error.message", str(e))
            raise
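A short usage sketch, reusing an agent from the examples above (the task string is illustrative):
# Exceptions are recorded on the span; timeouts return a fallback message
answer = safe_agent_execution(agent, "Summarize recent developments in renewable energy")
print(answer)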
Deployment on Vertex AI
from google.adk.deployment import VertexAIDeployment
# Deploy agent to Vertex AI
deployment = VertexAIDeployment(
    project_id="your-project-id",
    region="us-central1"
)
# Deploy your agent
deployment.deploy(
    agent=agent,
    name="my-adk-agent",
    version="1.0.0"
)
# The agent is now available as an API endpoint
endpoint = deployment.get_endpoint()
print(f"Agent deployed at: {endpoint}")
Performance Evaluation
from google.adk.evaluation import Evaluator
# Create evaluator
evaluator = Evaluator(
    metrics=["accuracy", "latency", "cost"]
)
# Evaluate agent performance
test_cases = [
    "What is the capital of France?",
    "Explain quantum computing",
    "Write a haiku about AI"
]
results = evaluator.evaluate(
    agent=agent,
    test_cases=test_cases
)
print(f"Average accuracy: {results['accuracy']}")
print(f"Average latency: {results['latency']}ms")
print(f"Total cost: ${results['cost']}")
Next Steps
✅ Verify traces: Check your Orq.ai dashboard to see incoming ADK agent traces (see the connectivity check snippet after this list)
✅ Monitor agent workflows: Track sequential, parallel, and loop agent executions
✅ Analyze tool usage: Review which tools are called and their performance
✅ Track multi-agent coordination: Understand how agents collaborate in complex systems
✅ Optimize performance: Use trace data to identify bottlenecks in agent workflows
✅ Debug agent reasoning: Examine decision paths and tool selection patterns
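To confirm that traces are reaching Orq.ai before wiring up a full agent, you can emit a single test span and flush the exporter. A minimal sketch, reusing the OpenTelemetry setup from the Integrations section:
from opentelemetry import trace

tracer = trace.get_tracer("orq-connectivity-check")

# Emit one span and force the batch processor to export it immediately
with tracer.start_as_current_span("adk.test_span") as span:
    span.set_attribute("test", True)

trace.get_tracer_provider().force_flush()
print("Test span sent - check the Orq.ai dashboard for 'adk.test_span'")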
Related Documentation
- Orq.ai Dashboard Guide
- Google ADK Documentation
- ADK GitHub Repository
- OpenTelemetry Best Practices
- Agent Development Best Practices