Frameworks
Comprehensive OpenTelemetry integration for AI frameworks
Overview
Orq.ai is an OpenTelemetry-native backend for AI systems. Send us OTLP traces and we'll turn them into rich insights about LLM calls, agent steps, tool invocations, retrievals, costs, tokens, and latency, using the official GenAI semantic conventions.
What We Collect
Our OpenTelemetry integration can instrument everything that OpenTelemetry instruments: databases, API calls, HTTP requests, and more.
In addition, we've built support for collecting traces from AI-specific operations following the official GenAI semantic conventions specification.
Quick Start
Configure your environment to send traces to Orq.ai.
Make sure you have an API key ready to use in place of <ORQ_API_KEY>.
Unix/Linux/macOS
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=your-service,service.version=1.0.0"
Using .env file
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=your-service,service.version=1.0.0
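OTEL_EXPORTER_OTLP_HEADERS is a comma-separated list of key=value pairs, each of which becomes one HTTP header on every export request. As a rough sketch of how an exporter interprets that string (the function name here is hypothetical, not part of any SDK; the real spec additionally allows URL-encoded values):

```python
def parse_otlp_headers(raw: str) -> dict:
    """Split a comma-separated key=value list into a headers dict."""
    headers = {}
    for pair in raw.split(","):
        if "=" in pair:
            key, _, value = pair.partition("=")
            headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("Authorization=Bearer <ORQ_API_KEY>"))
# {'Authorization': 'Bearer <ORQ_API_KEY>'}
```

Note that the value may itself contain spaces (as in "Bearer ..."), which is why only the first "=" in each pair is treated as the separator.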
Send Traces with the OTEL SDK
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Bearer <ORQ_API_KEY>"
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=your-service,service.version=1.0.0"
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("your-service")
with tracer.start_as_current_span("example") as span:
    # Add GenAI attributes per the spec
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
const exporter = new OTLPTraceExporter({
  url: "https://api.orq.ai/v2/otel/v1/traces",
  headers: { Authorization: "Bearer <ORQ_API_KEY>" },
});
const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "your-service",
    [SemanticResourceAttributes.SERVICE_VERSION]: "1.0.0",
  }),
});
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
const tracer = provider.getTracer("your-service");
const span = tracer.startSpan("example");
span.setAttribute("gen_ai.system", "openai");
span.setAttribute("gen_ai.request.model", "gpt-4o-mini");
span.end();
(Optional) OpenTelemetry Collector
Use the Collector to centralize exporting from multiple services through a single pipeline.
receivers:
  otlp:
    protocols:
      http:
      grpc:
exporters:
  otlphttp/orq:
    endpoint: https://api.orq.ai/v2/otel
    headers:
      Authorization: Bearer ${ORQ_API_KEY}
processors:
  batch: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/orq]
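Assuming the configuration above is saved as otel-collector.yaml (a hypothetical filename) and a Collector binary such as otelcol or otelcol-contrib is installed, you can export your API key and start the pipeline:

```shell
# The ${ORQ_API_KEY} reference in the config is resolved from the environment.
export ORQ_API_KEY="<ORQ_API_KEY>"
otelcol-contrib --config otel-collector.yaml
```

Point your services' OTEL_EXPORTER_OTLP_ENDPOINT at the Collector instead of Orq.ai directly; the Collector then batches and forwards traces on their behalf.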
Add GenAI SemConv Attributes
with tracer.start_as_current_span("llm.call") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.type", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.max_tokens", 256)
    span.set_attribute("gen_ai.usage.input_tokens", 123)
    span.set_attribute("gen_ai.usage.output_tokens", 89)
const span = tracer.startSpan("llm.call");
span.setAttribute("gen_ai.system", "anthropic");
span.setAttribute("gen_ai.request.type", "messages");
span.setAttribute("gen_ai.request.model", "claude-3-5");
span.setAttribute("gen_ai.usage.input_tokens", 410);
span.setAttribute("gen_ai.usage.output_tokens", 77);
span.end();
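If you set these attributes in many places, it can help to keep the spec keys in one spot. This is a hypothetical convenience helper, not part of the OpenTelemetry SDK; the keys follow the GenAI semantic conventions used above:

```python
def genai_attributes(system, model, input_tokens=None, output_tokens=None):
    """Bundle GenAI semantic-convention attributes for one LLM call."""
    attrs = {
        "gen_ai.system": system,
        "gen_ai.request.model": model,
    }
    if input_tokens is not None:
        attrs["gen_ai.usage.input_tokens"] = input_tokens
    if output_tokens is not None:
        attrs["gen_ai.usage.output_tokens"] = output_tokens
    return attrs

print(genai_attributes("openai", "gpt-4o-mini", 123, 89))
# {'gen_ai.system': 'openai', 'gen_ai.request.model': 'gpt-4o-mini',
#  'gen_ai.usage.input_tokens': 123, 'gen_ai.usage.output_tokens': 89}
```

You could then apply the dict in one pass, e.g. `for key, value in genai_attributes(...).items(): span.set_attribute(key, value)`.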
Framework Guides
CrewAI
A Python framework for orchestrating role-playing, autonomous AI agents that work together in coordinated crews to tackle complex multi-step tasks with advanced collaboration capabilities.
Google ADK
Google's open-source, code-first toolkit for building and deploying sophisticated multi-agent AI systems with flexible orchestration, rich tool ecosystems, and enterprise-grade deployment options.
LangChain / LangGraph
A Python/JavaScript framework for building applications with large language models through composable chains, agents, and integrations with external data sources and APIs.
LlamaIndex
A Python framework for building RAG (Retrieval-Augmented Generation) applications with comprehensive document indexing, vector search, and query processing capabilities for knowledge-driven AI systems.
OpenAI Agents
OpenAI's official SDK for building stateful, multi-turn AI agents that can use tools, maintain conversation context, and handle complex workflows with built-in session management.
Vercel AI SDK
A TypeScript-first SDK for building AI-powered applications with streaming responses, tool calling, and multi-provider support, optimized for React, Next.js, and edge runtime deployments.