Frameworks

Comprehensive OpenTelemetry integration for AI frameworks

Overview

Orq.ai is an OpenTelemetry-native backend for AI systems. Send us OTLP traces and we’ll turn them into rich insights about LLM calls, agent steps, tool invocations, retrievals, costs, tokens, and latency—using the official GenAI semantic conventions.

What We Collect

Our OpenTelemetry integration can instrument everything OpenTelemetry can: databases, API calls, HTTP requests, and more.

Moreover, we've built support for collecting traces from AI-specific operations following the official OpenTelemetry GenAI semantic conventions.

Quick Start

Configure your environment to send traces to Orq.ai.

📘 Ensure you have an API key ready to use in place of <ORQ_API_KEY>.

Unix/Linux/macOS

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=your-service,service.version=1.0.0"
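Note that the generic OTEL_EXPORTER_OTLP_ENDPOINT is a base URL: OTLP/HTTP exporters append the per-signal path (/v1/traces for traces), so spans land at https://api.orq.ai/v2/otel/v1/traces. A minimal sketch of that derivation (traces_url is our illustrative helper, not an SDK function):

```python
def traces_url(base_endpoint: str) -> str:
    """Mimic how OTLP/HTTP exporters derive the traces URL from the
    generic OTEL_EXPORTER_OTLP_ENDPOINT: append the /v1/traces path."""
    return base_endpoint.rstrip("/") + "/v1/traces"

url = traces_url("https://api.orq.ai/v2/otel")
# -> "https://api.orq.ai/v2/otel/v1/traces"
```

This is why exporters that take a full signal URL (rather than a base endpoint) must be given the /v1/traces path explicitly.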

Using a .env file

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=your-service,service.version=1.0.0
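Both OTEL_RESOURCE_ATTRIBUTES and OTEL_EXPORTER_OTLP_HEADERS are comma-separated lists of key=value pairs. A sketch of how SDKs parse them (parse_kv_list is our illustrative name, not an SDK API):

```python
def parse_kv_list(value: str) -> dict:
    """Parse a comma-separated key=value list, as used by
    OTEL_RESOURCE_ATTRIBUTES and OTEL_EXPORTER_OTLP_HEADERS.
    Splitting on the first '=' only keeps values containing '=' intact."""
    pairs = (item.split("=", 1) for item in value.split(",") if item.strip())
    return {k.strip(): v.strip() for k, v in pairs}

attrs = parse_kv_list("service.name=your-service,service.version=1.0.0")
```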

Send Traces with the OTEL SDK

Python

import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Bearer <ORQ_API_KEY>"
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=your-service,service.version=1.0.0"

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("your-service")

with tracer.start_as_current_span("example"):
    # Add GenAI attributes per the spec
    span = trace.get_current_span()
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])

TypeScript

import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);

const exporter = new OTLPTraceExporter({
  url: "https://api.orq.ai/v2/otel/v1/traces",
  headers: { Authorization: "Bearer <ORQ_API_KEY>" },
});

const provider = new NodeTracerProvider({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "your-service",
    [SemanticResourceAttributes.SERVICE_VERSION]: "1.0.0",
  }),
});

provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();

const tracer = provider.getTracer("your-service");
const span = tracer.startSpan("example");
span.setAttribute("gen_ai.system", "openai");
span.setAttribute("gen_ai.request.model", "gpt-4o-mini");
span.end();

(Optional) OpenTelemetry Collector

Use the OpenTelemetry Collector to centralize batching, authentication, and exporting.

receivers:
  otlp:
    protocols:
      http:
      grpc:

exporters:
  otlphttp/orq:
    endpoint: https://api.orq.ai/v2/otel
    headers:
      Authorization: Bearer ${ORQ_API_KEY}

processors:
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/orq]
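With the Collector running, point application SDKs at it instead of Orq.ai directly (4318 is the Collector's default OTLP/HTTP port); the Collector then attaches the Authorization header when forwarding to Orq.ai:

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
# The app no longer needs the API key; the Collector adds
# the Authorization header when exporting to Orq.ai.
unset OTEL_EXPORTER_OTLP_HEADERS
```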

Add GenAI SemConv Attributes

Python

with tracer.start_as_current_span("llm.call") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.type", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.max_tokens", 256)
    span.set_attribute("gen_ai.usage.input_tokens", 123)
    span.set_attribute("gen_ai.usage.output_tokens", 89)

TypeScript

const span = tracer.startSpan("llm.call");
span.setAttribute("gen_ai.system", "anthropic");
span.setAttribute("gen_ai.request.type", "messages");
span.setAttribute("gen_ai.request.model", "claude-3-5");
span.setAttribute("gen_ai.usage.input_tokens", 410);
span.setAttribute("gen_ai.usage.output_tokens", 77);
span.end();
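The GenAI conventions also recommend naming LLM spans "{operation} {model}" (for example, chat gpt-4o-mini) so backends can group them consistently. A small helper sketch (genai_span_name is our name, not an SDK API):

```python
def genai_span_name(operation: str, model: str) -> str:
    """Build a span name following the GenAI semconv recommendation:
    '{gen_ai.operation.name} {gen_ai.request.model}'."""
    return f"{operation} {model}"

name = genai_span_name("chat", "gpt-4o-mini")
# -> "chat gpt-4o-mini"
```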

Framework Guides