Pydantic AI

Integrate Orq.ai with Pydantic AI using OpenTelemetry

Getting Started

Pydantic AI is a Python agent framework designed to make it easier to build production-grade applications with Generative AI. Integrate with Orq.ai to gain complete observability into your AI agent interactions, tool usage, model performance, and conversation flows.

Prerequisites

Before you begin, ensure you have:

  • An Orq.ai account and API key
  • Pydantic AI installed in your project
  • Python 3.9+
  • A supported LLM provider (OpenAI, Anthropic, etc.)

Install Dependencies

# Core Pydantic AI packages
pip install pydantic-ai

# Your preferred LLM provider
pip install openai  # or anthropic, groq, etc.

# Core OpenTelemetry packages (required for all integration methods)
pip install opentelemetry-sdk opentelemetry-exporter-otlp

Configure Orq.ai

Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:

Unix/Linux/macOS:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=pydantic-ai-app,service.version=1.0.0"

Windows (PowerShell):

$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=pydantic-ai-app,service.version=1.0.0"

Using .env file:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=pydantic-ai-app,service.version=1.0.0
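
The .env file is not loaded automatically; if you take this route, python-dotenv is one common way to load it before any tracing setup runs:

# pip install python-dotenv
from dotenv import load_dotenv

# Load .env into the process environment before configuring OpenLit/Logfire/OTel,
# so the OTEL_* variables are visible to the exporters
load_dotenv()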

Integrations

Choose your preferred OpenTelemetry framework for collecting traces:

OpenLit

Best for: Quick setup with automatic instrumentation (Supports Pydantic AI SDK >= 0.2.17)

pip install openlit
import openlit
import os

# Initialize OpenLit with Orq.ai endpoint
openlit.init(
    otlp_endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT"),
    otlp_headers=os.getenv("OTEL_EXPORTER_OTLP_HEADERS")
)

# Your Pydantic AI code works normally - tracing is automatic!
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o-mini')
result = agent.run_sync('What is the capital of France?')  # run() is async; run_sync() works in plain scripts
print(result.data)

Logfire

Best for: Native Pydantic ecosystem integration and rich visualization

pip install logfire
import logfire
from pydantic_ai import Agent

# Configure Logfire to export via the OTLP variables above (no Logfire cloud account needed)
logfire.configure(send_to_logfire=False)

# Instrument Pydantic AI automatically
logfire.instrument_pydantic_ai()

# Your agent code works normally with automatic tracing
agent = Agent('openai:gpt-4o-mini')
result = agent.run_sync('What is the capital of France?')
print(result.data)

MLflow

Best for: ML experimentation and model lifecycle management

pip install mlflow
import mlflow
from pydantic_ai import Agent

# MLflow tracking runs locally by default (./mlruns); don't point the tracking URI
# at the OTLP endpoint. Trace export to Orq.ai is driven by the OTEL_* environment
# variables configured above (set OTEL_EXPORTER_OTLP_TRACES_ENDPOINT to the same
# value if your MLflow version requires the traces-specific variable).
# If your MLflow version ships Pydantic AI autologging, enable it to capture traces:
# mlflow.pydantic_ai.autolog()

# Use an MLflow run to group parameters and metrics for this invocation
with mlflow.start_run():
    agent = Agent('openai:gpt-4o-mini')
    result = agent.run_sync('What is the capital of France?')

    # Log metrics
    mlflow.log_param("model", "gpt-4o-mini")
    mlflow.log_metric("response_length", len(result.data))

    print(result.data)
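
Response length is a rough proxy; token usage is usually more informative. Pydantic AI exposes per-run usage on the result object, so a sketch along these lines can log it to MLflow (the field names on the usage object vary slightly between SDK versions, so treat them as assumptions):

import mlflow
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o-mini')

with mlflow.start_run():
    result = agent.run_sync('What is the capital of France?')
    usage = result.usage()  # token counts for this run

    mlflow.log_param("model", "gpt-4o-mini")
    # Field names may differ between Pydantic AI versions
    mlflow.log_metric("request_tokens", usage.request_tokens or 0)
    mlflow.log_metric("response_tokens", usage.response_tokens or 0)
    mlflow.log_metric("total_tokens", usage.total_tokens or 0)

    print(result.data)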

OpenInference

Best for: Arize ecosystem integration (experimental support)

pip install openinference-instrumentation
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from pydantic_ai import Agent
import os

# Initialize OpenTelemetry
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT"),
            headers={"Authorization": f"Bearer {os.getenv('ORQ_API_KEY')}"}
        )
    )
)
trace.set_tracer_provider(tracer_provider)

# Enable Pydantic AI's built-in OpenTelemetry instrumentation so agent runs emit
# spans to the tracer provider configured above
agent = Agent('openai:gpt-4o-mini', instrument=True)
result = agent.run_sync('What is the capital of France?')
print(result.data)

Manual OpenTelemetry

Best for: Full control and custom instrumentation

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from pydantic_ai import Agent
import os

# Initialize OpenTelemetry
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "pydantic-ai-app"),
})
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("pydantic-ai")

# Manual tracing: wrap the run in a custom span; instrument=True additionally emits
# Pydantic AI's own spans (model requests, tool calls) beneath it
with tracer.start_as_current_span("pydantic-ai.run", attributes={"model": "gpt-4o-mini"}):
    agent = Agent('openai:gpt-4o-mini', instrument=True)
    result = agent.run_sync('What is the capital of France?')
    print(result.data)
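
One practical detail: BatchSpanProcessor exports spans in the background, so a short-lived script can exit before the last spans reach Orq.ai. Flushing the provider created above on shutdown avoids that:

import atexit

# Ensure buffered spans are exported before the process exits
atexit.register(provider.shutdown)

# ...or flush on demand at any point in the script:
provider.force_flush()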

Examples

Basic Agent Example

from pydantic_ai import Agent

# Setup your chosen integration above, then use Pydantic AI normally
agent = Agent('openai:gpt-4o-mini')

# Simple query
result = agent.run_sync('Explain quantum computing in simple terms')
print(result.data)

Agent with System Prompt

from pydantic_ai import Agent

agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt='You are a helpful assistant that always responds in a professional tone.'
)

result = agent.run_sync('What are the benefits of renewable energy?')
print(result.data)
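
System prompts can also be built at runtime with the @agent.system_prompt decorator. A minimal sketch, where the string dependency and the date injection are just illustrative choices:

from datetime import date

from pydantic_ai import Agent, RunContext

agent = Agent(
    'openai:gpt-4o-mini',
    deps_type=str,  # illustrative: the user's name is passed in as the dependency
    system_prompt='You are a helpful assistant that always responds in a professional tone.'
)

@agent.system_prompt
def add_context(ctx: RunContext[str]) -> str:
    # Appended to the static system prompt on every run
    return f"The user's name is {ctx.deps}. Today's date is {date.today()}."

result = agent.run_sync('Draft a short greeting for me.', deps='Alice')
print(result.data)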

Agent with Tools

from pydantic_ai import Agent, RunContext

agent = Agent('openai:gpt-4o-mini')

@agent.tool
async def get_weather(ctx: RunContext[None], location: str) -> str:
    """Get current weather for a location."""
    # Simulate weather API call
    return f"The weather in {location} is sunny and 75°F"

@agent.tool
async def search_web(ctx: RunContext[None], query: str) -> str:
    """Search the web for information."""
    # Simulate web search
    return f"Search results for '{query}': Found relevant information about the topic."

# Use the agent with tools
result = agent.run_sync('What is the weather like in San Francisco and find recent news about climate change?')
print(result.data)
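
Tools can also reject bad arguments instead of failing the whole run by raising ModelRetry, which feeds the error text back to the model so it can try again. A sketch with a simulated lookup:

from pydantic_ai import Agent, ModelRetry, RunContext

agent = Agent('openai:gpt-4o-mini')

@agent.tool
async def get_temperature(ctx: RunContext[None], city: str) -> str:
    """Get the current temperature for a known city."""
    known = {"San Francisco": "62°F", "Paris": "18°C"}
    if city not in known:
        # Sent back to the model so it can retry the tool call with a better argument
        raise ModelRetry(f"Unknown city {city!r}; try one of {list(known)}")
    return f"It is {known[city]} in {city}."

result = agent.run_sync('How warm is it in SF right now?')
print(result.data)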

Structured Response Example

from pydantic_ai import Agent
from pydantic import BaseModel
from typing import List

class BookRecommendation(BaseModel):
    title: str
    author: str
    genre: str
    rating: float
    summary: str

class BookList(BaseModel):
    recommendations: List[BookRecommendation]
    total_count: int

agent = Agent('openai:gpt-4o-mini', result_type=BookList)  # on recent SDK versions this parameter is named output_type

result = agent.run_sync('Recommend 3 science fiction books published in the last 5 years')
print(f"Found {result.data.total_count} recommendations:")
for book in result.data.recommendations:
    print(f"- {book.title} by {book.author} ({book.rating}/5)")

Conversation Context Example

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o-mini')

# Start a conversation
messages = []

# First interaction
result1 = agent.run_sync('My name is Alice and I love cooking')
messages.extend(result1.all_messages())
print("Assistant:", result1.data)

# Continue the conversation with context
result2 = agent.run_sync(
    'What recipes would you recommend for someone like me?',
    message_history=messages
)
messages.extend(result2.all_messages())
print("Assistant:", result2.data)

Error Handling with Custom Spans

from opentelemetry import trace
from pydantic_ai import Agent
from pydantic_ai.exceptions import UnexpectedModelBehavior

tracer = trace.get_tracer("pydantic-ai-custom")

with tracer.start_as_current_span("pydantic-ai.error_handling") as span:
    try:
        agent = Agent('openai:gpt-4o-mini')
        result = agent.run_sync('Test query')

        # Add success attributes
        span.set_attribute("operation.success", True)
        span.set_attribute("response.length", len(result.data))
        span.set_attribute("model.name", "gpt-4o-mini")

    except UnexpectedModelBehavior as e:
        # Record the exception (e.g. retries exhausted or malformed model output)
        span.record_exception(e)
        span.set_attribute("operation.success", False)
        span.set_attribute("error.type", "UnexpectedModelBehavior")
        raise
    except Exception as e:
        # Record other exceptions
        span.record_exception(e)
        span.set_attribute("operation.success", False)
        span.set_attribute("error.type", type(e).__name__)
        raise

Next Steps

✅ Verify traces: Check your Orq.ai dashboard to see incoming Pydantic AI traces
✅ Add custom attributes: Enhance traces with user IDs, session info, or business metrics (see the sketch below)
✅ Monitor agent performance: Track response times, token usage, and tool effectiveness
✅ Set up alerts: Configure monitoring for agent failures or unexpected behavior
✅ Analyze conversations: Use trace data to understand user interaction patterns
✅ Optimize prompts: Use performance data to improve system prompts and tool descriptions
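
For the custom-attributes step, one approach is to wrap agent calls in your own span and attach whatever identifiers matter to your product. The attribute names below (user.id, session.id) are examples, not a required schema:

from opentelemetry import trace
from pydantic_ai import Agent

tracer = trace.get_tracer("pydantic-ai-app")
agent = Agent('openai:gpt-4o-mini', instrument=True)

def answer(question: str, user_id: str, session_id: str) -> str:
    with tracer.start_as_current_span("agent.answer") as span:
        # Business/context attributes make traces filterable in the Orq.ai dashboard
        span.set_attribute("user.id", user_id)
        span.set_attribute("session.id", session_id)
        result = agent.run_sync(question)
        span.set_attribute("response.length", len(result.data))
        return result.data

print(answer('What is the capital of France?', user_id='alice-123', session_id='sess-42'))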
