Frameworks
Comprehensive OpenTelemetry integration for AI frameworks
Overview
Orq.ai provides comprehensive OpenTelemetry (OTEL) integration for monitoring and tracing AI applications. Our platform collects traces from your AI frameworks and provides deep insights into LLM interactions, agent behavior, tool usage, and system performance.
What We Instrument
Our OpenTelemetry integration can instrument everything that OpenTelemetry already instruments: databases, API calls, HTTP requests, and more. On top of that, we've built support for collecting traces from AI-specific operations following the official OpenTelemetry semantic conventions.
Quick Start
Configure your environment to send traces to Orq.ai:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=your-service,service.version=1.0.0"
Using .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=your-service,service.version=1.0.0
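Both forms above configure the same thing: OTLP exporters read OTEL_EXPORTER_OTLP_HEADERS as comma-separated key=value pairs. As a rough illustration of how such a string maps to HTTP headers (this helper is a sketch for clarity, not any SDK's actual parser):

```python
def parse_otlp_headers(raw: str) -> dict:
    # Split on commas into pairs, then on the FIRST '=' only,
    # so values containing spaces like "Bearer <key>" stay intact.
    headers = {}
    for pair in raw.split(","):
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("Authorization=Bearer my-key,X-Env=prod"))
# {'Authorization': 'Bearer my-key', 'X-Env': 'prod'}
```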
OpenTelemetry Collection Frameworks
Choose from multiple OpenTelemetry collection frameworks based on your needs:
🚀 OpenLit
Best for: Quick setup with automatic instrumentation
Supports: LangChain, LlamaIndex, OpenAI Agents, LiteLLM, CrewAI, Pydantic AI, DSPy, AutoGen, Haystack
import openlit

openlit.init(
    otlp_endpoint="https://api.orq.ai/v2/otel",
    otlp_headers="Authorization=Bearer <ORQ_API_KEY>"
)
import Openlit from "openlit";

Openlit.init({
    otlpEndpoint: "https://api.orq.ai/v2/otel",
    otlpHeaders: "Authorization=Bearer <ORQ_API_KEY>",
});
🔥 Logfire
Best for: Pydantic ecosystem and rich visualization
Supports: Pydantic AI, OpenAI, Anthropic, LangChain, LlamaIndex, Mirascope, LiteLLM
import logfire

# Configure Logfire instrumentation; send_to_logfire=False keeps traces
# flowing only to the OTLP endpoint configured in your environment.
logfire.configure(
    service_name='my_agent_service',
    send_to_logfire=False,
)

# Automatically patches the OpenAI Agents SDK to send traces via OTLP
# to the configured endpoint (Orq.ai, in this setup).
logfire.instrument_openai_agents()
📊 MLflow
Best for: ML experimentation and model lifecycle
Supports: OpenAI, LangChain, LangGraph, OpenAI Agents, LlamaIndex, CrewAI, Semantic Kernel, DSPy, AutoGen, Instructor, Smolagents, Agno
import os

import mlflow
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Bearer {ORQ_API_KEY}"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(trace_provider)
# Creates a tracer from the global tracer provider
tracer = trace.get_tracer(__name__)
mlflow.langchain.autolog()
🎯 OpenInference
Best for: Arize ecosystem integration
Supports: OpenAI, LlamaIndex, LangChain, DSPy, Anthropic, Bedrock
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
endpoint = "https://api.orq.ai/v2/otel/v1/traces"
headers = {"Authorization": f"Bearer {ORQ_API_KEY}"}

tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(OTLPSpanExporter(endpoint, headers=headers))
)
# Optionally, also print the spans to the console.
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")
🔍 OpenLLMetry
Best for: Non-intrusive tracing
Supports: OpenAI, Anthropic, Cohere, LangChain, LlamaIndex, Haystack, LiteLLM, CrewAI
import os

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

os.environ["TRACELOOP_BASE_URL"] = "https://api.orq.ai/v2/otel"
os.environ["TRACELOOP_HEADERS"] = f"Authorization=Bearer%20{ORQ_API_KEY}"

Traceloop.init(disable_batch=True)
client = OpenAI()

@workflow(name="story")
def run_story_stream(client):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Where Orq.ai exceeds compared to other frameworks."}],
    )
    return completion.choices[0].message.content

print(run_story_stream(client))
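Note that the TRACELOOP_HEADERS value above is URL-encoded (`%20` in place of the space between the auth scheme and the API key). A small stdlib sketch for building such a value without hand-encoding it (the helper name is illustrative):

```python
from urllib.parse import quote

def traceloop_auth_header(scheme: str, api_key: str) -> str:
    # quote() percent-encodes the space between scheme and key,
    # producing the encoded form the env var expects.
    return "Authorization=" + quote(f"{scheme} {api_key}")

print(traceloop_auth_header("Bearer", "my-key"))
# Authorization=Bearer%20my-key
```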
Framework Guides
LangChain / LangGraph
Build complex chains, agents, and graph workflows with comprehensive tracing and monitoring.
LlamaIndex
Monitor RAG pipelines, document processing, and embedding operations with detailed insights.
Pydantic AI
Type-safe AI agents with built-in validation and native Pydantic ecosystem integration.
CrewAI
Multi-agent orchestration with full visibility into agent collaboration and task execution.
OpenAI Agents
Native OpenAI Swarm and Assistants API integration with automatic instrumentation.
Semantic Kernel
Microsoft's AI orchestration framework with enterprise-grade observability.
AutoGen / AG2
Multi-agent conversation systems with detailed interaction tracking and analysis.
DSPy
Declarative language model programming with automatic optimization tracking.
Haystack
End-to-end NLP pipelines with comprehensive component-level monitoring.
LiteLLM
Unified LLM interface with automatic fallbacks and load balancing insights.
Instructor
Structured output extraction with validation tracking and retry monitoring.
Vercel AI SDK
React and Next.js AI applications with streaming support and telemetry.
Google AI SDK
Google Generative AI and Vertex AI integration with native telemetry.
Smolagents
Lightweight agent framework with tool usage tracking and debugging.
Mastra
AI workflow orchestration with visual debugging and performance analysis.
LiveKit
Real-time AI communication with latency tracking and quality metrics.
Agno
Cognitive architecture framework with reasoning path visualization.
BeeAI
Swarm intelligence framework with collective behavior monitoring.
Start monitoring your AI applications today with comprehensive OpenTelemetry tracing from Orq.ai.