With Orq.ai's Traces feature you can monitor and observe any external framework your application is built on. Setup takes only a few lines of code: use the Orq.ai SDK (which includes the @traced decorator for instrumenting your own functions), or point any standard OpenTelemetry exporter at Orq.ai. Frameworks with a native integration, such as LangChain and LangGraph, get the most precise mapping of their events onto Orq.ai's data model. If you already run an agent in another system, this guide walks you through tracing it with Orq.ai step by step.

OpenTelemetry (OTel) is an open-source observability framework for instrumenting applications to generate and export telemetry data such as traces, metrics, and logs. It provides standardized APIs and SDKs for capturing telemetry, which can be sent to an observability backend like Orq.ai. Major AI frameworks and instrumentation layers, including the Vercel AI SDK, LlamaIndex, and Traceloop's OpenLLMetry, have adopted it, with comprehensive SDK support across languages.

┌─────────────────────────────────────────┐
│          Your LLM Application           │
│  ┌───────────────────────────────────┐  │
│  │  LangChain • LlamaIndex • OpenAI  │  │
│  └───────────────────────────────────┘  │
│                                         │
│  ┌───────────────────────────────────┐  │
│  │        OpenTelemetry SDK          │  │
│  │   ✓ Auto-instrumentation          │  │
│  │   ✓ Trace generation              │  │
│  │   ✓ Metric collection             │  │
│  └───────────────────────────────────┘  │
└────────────────────┬────────────────────┘
                     │
                     │ Exports telemetry data
                     │ via OTLP/HTTP protocol
                     ▼
┌─────────────────────────────────────────┐
│      orq.ai Observability Backend       │
│                                         │
│  🔄 Ingestion                           │
│     Receives & processes OTel data      │
│                                         │
│  💾 Storage                             │
│     Stores traces, metrics & logs       │
│                                         │
│  🧠 Analysis                            │
│     • Automatic cost calculation        │
│     • Token usage tracking              │
│     • Performance monitoring            │
│                                         │
│  📊 Visualization                       │
│     • Trace explorer & waterfall view   │
│     • Dashboards & analytics            │
│     • Prompt versioning & evaluation    │
│                                         │
└─────────────────────────────────────────┘

Prerequisites

  • Orq.ai API key
  • Python 3.8+
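  • The OpenAI and OpenTelemetry Python packages (the exact packages imported in the quick start below):

    pip install openai opentelemetry-sdk opentelemetry-exporter-otlp-proto-http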

Quick start

Let’s create a simple traced LLM call to verify your setup:
import os
from openai import OpenAI
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME

# Configure OpenTelemetry
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v1/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Bearer {os.environ['ORQ_API_KEY']}"

# Set up the tracer
resource = Resource.create({SERVICE_NAME: "hello-world-llm"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

# Get a tracer
tracer = trace.get_tracer(__name__)

# Initialize OpenAI client
client = OpenAI()

# Create a traced LLM call
with tracer.start_as_current_span("hello_world_llm_call") as span:
    # Set span attributes for better observability
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    
    # Make the LLM call
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": "Say hello in 3 languages"}
        ]
    )
    
    # Record response metadata
    span.set_attribute("gen_ai.response.model", response.model)
    span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
    span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
    
    print(response.choices[0].message.content)

# Ensure spans are exported before exit
trace.get_tracer_provider().force_flush()
Navigate to your orq.ai dashboard at https://app.orq.ai/traces to see your first trace!

Endpoint Configuration
Traces:  https://api.orq.ai/v1/otel/v1/traces
Metrics: https://api.orq.ai/v1/otel/v1/metrics
Logs:    https://api.orq.ai/v1/otel/v1/logs
Environment Variables
  • OTEL_EXPORTER_OTLP_ENDPOINT
    OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v1/otel
    
  • OTEL_EXPORTER_OTLP_HEADERS
    OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer ${ORQ_API_KEY},X-Workspace-ID=${ORQ_WORKSPACE_ID}
    
  • Signal-specific endpoints
    # Traces
    OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.orq.ai/v1/otel/v1/traces
    
    # Metrics
    OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=https://api.orq.ai/v1/otel/v1/metrics
    
    # Logs
    OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://api.orq.ai/v1/otel/v1/logs
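
If you prefer configuring the exporter in code rather than through environment variables, the OTLP/HTTP exporter accepts the endpoint and headers directly. A minimal sketch, assuming your API key is stored in the ORQ_API_KEY environment variable:

import os
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Explicit arguments take precedence over the OTEL_EXPORTER_OTLP_* variables
exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v1/otel/v1/traces",  # signal-specific traces endpoint
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)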
    

Integration Examples

  • Native SDK Integration - use the Orq.ai SDK, which also provides the @traced decorator for instrumenting your own functions
  • Framework-Specific Examples:
    • LangChain (see the sketch after this list)
    • LlamaIndex
    • Vercel AI SDK
    • Google ADK
    • CrewAI
    • OpenAI Agents
  • Custom Implementation using OpenTelemetry SDK directly
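As one concrete path, a LangChain or LangGraph agent (tools included) can be traced without touching the agent code itself by using the community OpenLLMetry instrumentation package. A minimal sketch, assuming opentelemetry-instrumentation-langchain is installed and the environment variables from the quick start are already set:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

# Same provider setup as the quick start
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

# From here on, every chain, agent step, and tool call that LangChain
# executes is emitted as a span and exported to orq.ai automatically
LangchainInstrumentor().instrument()

Where a native Orq.ai integration exists, as for LangChain and LangGraph, prefer it: it gives the richest mapping of framework events, so tool calls and nested agent steps land as properly typed child spans in the trace explorer.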

Property Mapping / Attribute Reference

OTel attributes map onto orq.ai's data model at the following levels (illustrated in the sketch after this list):
  • Trace-level attributes (name, userId, sessionId, tags, metadata)
  • Observation/Span-level attributes (type, input, output, model, usage, cost)
  • GenAI semantic conventions support
  • Custom orq.ai-specific attributes (e.g., orq.* namespace)
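As an illustration, the sketch below combines both attribute families on a single agent span. The gen_ai.* keys follow the GenAI semantic conventions; the orq.* keys shown here (orq.session_id, orq.user_id, orq.tags) are hypothetical placeholders for the custom namespace, not a confirmed attribute list:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)  # reuses the provider set up in the quick start

with tracer.start_as_current_span("research_agent.run") as span:
    # GenAI semantic conventions (standardized attribute names)
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.usage.input_tokens", 812)
    span.set_attribute("gen_ai.usage.output_tokens", 240)

    # Custom orq.* attributes: key names below are illustrative placeholders
    span.set_attribute("orq.session_id", "session-42")
    span.set_attribute("orq.user_id", "user-7")
    span.set_attribute("orq.tags", ["agents", "production"])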

Advanced Features

  • Using OpenTelemetry Collector for fan-out
  • Filtering spans before sending to orq.ai (see the sketch after this list)
  • Distributed tracing / context propagation
  • Hybrid tracing (sending to multiple backends)
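Span filtering can be done client-side with a custom SpanProcessor. A minimal sketch that forwards only GenAI spans (those carrying a gen_ai.system attribute) to the batch processor and drops everything else:

from opentelemetry.sdk.trace import SpanProcessor

class GenAIFilterProcessor(SpanProcessor):
    """Forwards only spans with a gen_ai.system attribute to the inner processor."""

    def __init__(self, inner: SpanProcessor):
        self._inner = inner

    def on_start(self, span, parent_context=None):
        self._inner.on_start(span, parent_context)

    def on_end(self, span):
        # Drop spans that carry no GenAI attributes
        if span.attributes and span.attributes.get("gen_ai.system"):
            self._inner.on_end(span)

    def shutdown(self):
        self._inner.shutdown()

    def force_flush(self, timeout_millis=30000):
        return self._inner.force_flush(timeout_millis)

# Wrap the processor from the quick start:
# provider.add_span_processor(GenAIFilterProcessor(BatchSpanProcessor(OTLPSpanExporter())))

Hybrid tracing works the same way on the provider side: register one span processor per backend, and every span is delivered to each exporter.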

Supported Instrumentation Libraries

  • Compatible OTel instrumentation SDKs
  • Coverage across instrumentation layers such as OpenLLMetry, OpenLIT, and Arize

Troubleshooting

  • Common errors and solutions
  • Version requirements
  • Protocol limitations
  • Self-hosting considerations (if applicable)

Best Practices

  • Filtering strategies
  • Metadata organization
  • Performance considerations
  • Security recommendations
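
Data & Privacy

All telemetry is exported over HTTPS to api.orq.ai and authenticated with your API key (and optional workspace ID). orq.ai only receives what your instrumentation explicitly emits: the spans, attributes, and payloads you attach. Keep secrets and personal data out of span attributes and prompts where possible, and use the filtering techniques from Advanced Features to strip sensitive spans before they leave your application. For retention and compliance specifics, consult orq.ai directly.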