AI Router: Route your LLM calls through the AI Router with a single base URL change. Zero vendor lock-in: always run on the best model at the lowest cost for your use case.
Observability: Instrument your code with OpenTelemetry to capture traces, logs, and metrics for every LLM call, agent step, and tool use.
AI Router
Overview
Pydantic AI is a Python agent framework designed to make it easier to build production-grade applications with Generative AI. By connecting Pydantic AI to Orq.ai’s AI Router, you transform experimental agents into production-ready systems with enterprise-grade capabilities.
Key Benefits
Orq.ai’s AI Router enhances your Pydantic AI agents with:
Complete Observability: Track every agent step, tool use, and interaction with detailed traces and analytics
Built-in Reliability: Automatic fallbacks, retries, and load balancing for production resilience
Cost Optimization: Real-time cost tracking and spend management across all your AI operations
Multi-Provider Access: Access 300+ LLMs from 20+ providers through a single, unified integration
Prerequisites
Before integrating Pydantic AI with Orq.ai, ensure you have:
An Orq.ai account and API Key
Python 3.9 or higher
Pydantic AI SDK installed
Installation
Install Pydantic AI and the OpenAI SDK:
pip install pydantic-ai openai
Configuration
Configure Pydantic AI to use Orq.ai’s AI Router by passing a custom OpenAI client:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
from openai import AsyncOpenAI
import os

# Configure the OpenAI client to point at the Orq.ai AI Router
client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Create a provider backed by the custom client
provider = OpenAIProvider(openai_client=client)

# Create the model and agent
model = OpenAIChatModel("gpt-4o", provider=provider)
agent = Agent(model=model)
Note: the AI Router base URL is https://api.orq.ai/v3/router; everything else matches a standard OpenAI client setup.
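Because the router is OpenAI-compatible, you can sanity-check connectivity with the plain OpenAI SDK before wiring up Pydantic AI. A minimal sketch, assuming ORQ_API_KEY is set and gpt-4o is enabled in your workspace:

import os
from openai import OpenAI

# Plain OpenAI client pointed at the AI Router; no Pydantic AI involved
client = OpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# A single chat completion confirms the key and base URL are correct
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)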
Basic Agent Example
Here’s a complete example of creating and running a Pydantic AI agent through Orq.ai:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
from openai import AsyncOpenAI
import os

# Configure the client with the Orq.ai AI Router
client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Create the provider and model
provider = OpenAIProvider(openai_client=client)
model = OpenAIChatModel("gpt-4o", provider=provider)

# Create the agent
agent = Agent(
    model=model,
    system_prompt="You are a helpful assistant.",
)

# Run the agent
result = agent.run_sync("Explain quantum computing in simple terms")
print(result.output)
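In an async application you can await the agent instead of blocking; run_sync is a convenience wrapper around the async run method. A minimal sketch reusing the agent defined above:

import asyncio

async def main() -> None:
    # agent.run() is the awaitable counterpart of agent.run_sync()
    result = await agent.run("Explain quantum computing in simple terms")
    print(result.output)

asyncio.run(main())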
Tool Usage
Pydantic AI agents can use tools while routing through Orq.ai:
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
from openai import AsyncOpenAI
import os

# Configure the client
client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Create the provider and model
provider = OpenAIProvider(openai_client=client)
model = OpenAIChatModel("gpt-4o", provider=provider)

# Create the agent
agent = Agent(model=model)

@agent.tool
async def get_weather(ctx: RunContext[None], location: str) -> str:
    """Get current weather for a location."""
    return f"The weather in {location} is sunny and 75°F"

# Run the agent with tool access
result = agent.run_sync("What's the weather in San Francisco?")
print(result.output)
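Tools can also receive shared state through RunContext by declaring a deps_type on the agent. A sketch under that assumption; the WeatherDeps dataclass here is hypothetical and might hold an HTTP client or config in practice:

from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class WeatherDeps:
    # Hypothetical dependency object passed to every tool call
    default_units: str

deps_agent = Agent(model=model, deps_type=WeatherDeps)

@deps_agent.tool
async def get_temperature(ctx: RunContext[WeatherDeps], location: str) -> str:
    """Report the temperature using the units carried in the run dependencies."""
    return f"It is 24°{ctx.deps.default_units} in {location}"

result = deps_agent.run_sync(
    "How warm is it in Amsterdam?",
    deps=WeatherDeps(default_units="C"),
)
print(result.output)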
Structured Outputs
Pydantic AI excels at structured outputs with type-safe validation:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
from openai import AsyncOpenAI
from pydantic import BaseModel
import os

class CityInfo(BaseModel):
    name: str
    country: str
    population: int
    famous_for: str

# Configure the client
client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Create the provider and model
provider = OpenAIProvider(openai_client=client)
model = OpenAIChatModel("gpt-4o", provider=provider)

# Create an agent with structured output
agent = Agent(model=model, output_type=CityInfo)

result = agent.run_sync("Tell me about Paris")
print(f"{result.output.name}, {result.output.country}")
print(f"Population: {result.output.population:,}")
print(f"Famous for: {result.output.famous_for}")
Model Selection
With Orq.ai, you can use any supported model from 20+ providers:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
from openai import AsyncOpenAI
import os

# Configure the client
client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

# Create the provider
provider = OpenAIProvider(openai_client=client)

# Use Claude
claude_model = OpenAIChatModel("claude-sonnet-4-5-20250929", provider=provider)
claude_agent = Agent(model=claude_model)

# Use Gemini
gemini_model = OpenAIChatModel("gemini-2.5-flash", provider=provider)
gemini_agent = Agent(model=gemini_model)

# Use any other model
groq_model = OpenAIChatModel("llama-3.3-70b-versatile", provider=provider)
groq_agent = Agent(model=groq_model)
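Beyond the router's server-side fallbacks, you can also chain models client-side with Pydantic AI's FallbackModel, which tries each model in order until one succeeds. A sketch, assuming the models defined above and a Pydantic AI version that ships FallbackModel:

from pydantic_ai.models.fallback import FallbackModel

# Try Claude first; fall back to Gemini if the call fails
fallback_model = FallbackModel(claude_model, gemini_model)
fallback_agent = Agent(model=fallback_model)

result = fallback_agent.run_sync("Summarize the theory of relativity in one sentence")
print(result.output)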
Observability
Getting Started
Integrate Pydantic AI with Orq.ai’s observability to gain complete insights into your AI agent interactions, tool usage, model performance, and conversation flows using OpenTelemetry.
Prerequisites
Before you begin, ensure you have:
An Orq.ai account and API Key
Pydantic AI installed in your project
Python 3.9+
Install Dependencies
# Core Pydantic AI packages
pip install pydantic-ai openai
# Core OpenTelemetry packages
pip install opentelemetry-sdk opentelemetry-exporter-otlp logfire
Set up your environment variables to connect to Orq.ai’s OpenTelemetry collector:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=pydantic-ai-app,service.version=1.0.0"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=pydantic-ai-app,service.version=1.0.0"
Using .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=pydantic-ai-app,service.version=1.0.0
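Python does not read a .env file automatically; one common approach (an assumption here, not something this guide requires) is the python-dotenv package, loaded before any OpenTelemetry setup runs:

# pip install python-dotenv
from dotenv import load_dotenv

# Load the OTEL_* variables from .env into the process environment
# before configuring any exporters or instrumentation
load_dotenv()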
Integration Example
Using Logfire for OpenTelemetry tracing:
import logfire
from pydantic_ai import Agent

# Configure Logfire to export via OpenTelemetry only
logfire.configure(
    service_name='orq-traces',
    send_to_logfire=False,  # Disable sending to Logfire cloud
)

# Instrument Pydantic AI automatically
logfire.instrument_pydantic_ai()

# Your agent code works normally with automatic tracing
agent = Agent('openai:gpt-4o-mini')
result = agent.run_sync('What is the capital of France?')
print(result.output)
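If you prefer not to use Logfire, the opentelemetry-sdk packages installed above can do the same job: build a tracer provider whose OTLP exporter picks up the OTEL_* environment variables, then enable Pydantic AI's built-in instrumentation. A sketch, assuming a recent Pydantic AI version that exposes Agent.instrument_all():

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

from pydantic_ai import Agent

# With no arguments, the OTLP exporter reads OTEL_EXPORTER_OTLP_ENDPOINT
# and OTEL_EXPORTER_OTLP_HEADERS from the environment
provider = TracerProvider(resource=Resource.create({"service.name": "pydantic-ai-app"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

# Turn on OpenTelemetry instrumentation for every agent
Agent.instrument_all()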
View Traces
View your traces and real-time analytics in the AI Studio under the Traces tab.
Evaluations & Experiments
Once your agents are running, use Evaluatorq to score outputs across a dataset and Experiments to compare configurations side-by-side.
Run Evaluations with Evaluatorq: Run parallel evaluations across your agents and compare results.
Run Experiments via the API: Compare agent configurations and view results in the AI Studio.