Overview

Orq.ai is an OpenTelemetry-native backend for AI systems. Send us OTLP traces and we’ll turn them into rich insights about LLM calls, agent steps, tool invocations, retrievals, costs, tokens, and latency—using the official GenAI semantic conventions.

What We Collect

Our OpenTelemetry integration can instrument anything OpenTelemetry can instrument: databases, API calls, HTTP requests, and more. In addition, we support traces from AI-specific operations that follow the official OpenTelemetry GenAI semantic conventions.

Quick Start

Configure your environment to send traces to Orq.ai.
Ensure you have an API key ready to use in place of <ORQ_API_KEY>.

Unix/Linux/macOS

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=your-service,service.version=1.0.0"

Using .env file

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=your-service,service.version=1.0.0

Send Traces with the OpenTelemetry SDK

import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Bearer <ORQ_API_KEY>"
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=your-service,service.version=1.0.0"

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("your-service")

with tracer.start_as_current_span("example") as span:
    # Add GenAI attributes per the spec
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])

(Optional) OpenTelemetry Collector

Use the OpenTelemetry Collector to centralize exporting:

receivers:
  otlp:
    protocols:
      http:
      grpc:

exporters:
  otlphttp/orq:
    endpoint: https://api.orq.ai/v2/otel
    headers:
      Authorization: "Bearer ${env:ORQ_API_KEY}"

processors:
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/orq]

Add GenAI SemConv Attributes

with tracer.start_as_current_span("llm.call") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.max_tokens", 256)
    span.set_attribute("gen_ai.usage.input_tokens", 123)
    span.set_attribute("gen_ai.usage.output_tokens", 89)

Using @traced Decorator with Python SDK

We’ve introduced the @traced decorator, a simple way to capture function-level traces directly in your Python code.
  • Automatically logs function inputs, outputs, and metadata
  • Supports nested spans and custom span types (LLM, agent, tool, etc.)
  • Works seamlessly with the Orq SDK initialization (no separate init required)
  • Integrates with OpenTelemetry for end-to-end distributed tracing

import time
import os
from orq_ai_sdk import traced, Orq

# Initialize Orq SDK - the traced decorator will automatically use this configuration
orq = Orq(
    api_key=os.environ.get('ORQ_API_KEY', '<ORQ_API_KEY>')
)

@traced
def process_user(user_id: str, action: str) -> dict:
    # Simulate some processing
    time.sleep(0.1)

    result = {
        "user_id": user_id,
        "action": action,
        "status": "completed",
        "timestamp": time.time(),
    }
    return result
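To see conceptually what such a decorator does, here is a hypothetical stand-in (not the Orq SDK's implementation) that records a function's name, inputs, output, and duration, the same span data @traced captures:

```python
import functools
import time

def traced_sketch(fn):
    # Hypothetical stand-in for an @traced-style decorator: wraps a function
    # and records its name, arguments, result, and wall-clock duration.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        # In a real tracer this span would be exported; here we just keep
        # the last one on the wrapper for inspection.
        wrapper.last_span = {
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.time() - start,
        }
        return result
    return wrapper

@traced_sketch
def greet(name: str) -> str:
    return f"hello {name}"

print(greet("ada"))              # hello ada
print(greet.last_span["name"])   # greet
```

The real decorator additionally attaches the span to the active OpenTelemetry trace, so nested @traced calls become child spans.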

Framework Guides