
Getting Started

OpenClaw is an open-source personal AI assistant that runs locally on your machine, connecting LLMs to messaging platforms and system tools. OpenClaw has a built-in diagnostics-otel plugin that exports traces, metrics, and logs over OTLP/HTTP — just enable the plugin and point it at Orq.ai.
OpenClaw’s OpenTelemetry support currently requires some manual configuration to work end-to-end. We are actively working with the OpenClaw team to improve this: a PR to streamline the integration is open. If better OTEL support matters to you, please leave a comment on the PR to help prioritize it.

Prerequisites

Before you begin, ensure you have:
  • An Orq.ai account and API Key
  • OpenClaw installed and running locally
  • Node.js 20+

Install OpenClaw

# Quick install
curl -fsSL https://openclaw.ai/install.sh | bash

# Or via npm
npm install -g openclaw

Configure OpenClaw

OpenClaw’s OTEL export is configured through ~/.openclaw/openclaw.json. You need to enable the diagnostics-otel plugin and configure the diagnostics.otel section.

Step 1: Enable the plugin

You can enable the plugin via the CLI:
openclaw plugins enable diagnostics-otel
Or add it directly to ~/.openclaw/openclaw.json:
{
  "plugins": {
    "allow": ["diagnostics-otel"],
    "entries": {
      "diagnostics-otel": {
        "enabled": true
      }
    }
  }
}

Step 2: Configure the OTEL exporter

Add the diagnostics section to ~/.openclaw/openclaw.json, pointing the endpoint at Orq.ai:
{
  "diagnostics": {
    "enabled": true,
    "otel": {
      "enabled": true,
      "endpoint": "https://api.orq.ai/v2/otel",
      "protocol": "http/protobuf",
      "headers": {
        "Authorization": "Bearer <ORQ_API_KEY>"
      },
      "serviceName": "openclaw",
      "traces": true,
      "metrics": true,
      "logs": false,
      "sampleRate": 1,
      "flushIntervalMs": 5000
    }
  }
}
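If you prefer to apply Steps 1 and 2 programmatically rather than editing the file by hand, the merge can be sketched in Python. The config path and key names mirror the JSON above; the `enable_otel` helper is illustrative, not part of OpenClaw.

```python
import json
from pathlib import Path

# Default config location from this guide; adjust if yours differs.
CONFIG_PATH = Path.home() / ".openclaw" / "openclaw.json"

def enable_otel(config: dict, api_key: str) -> dict:
    """Merge the plugin and diagnostics settings from Steps 1 and 2 into a config dict."""
    plugins = config.setdefault("plugins", {})
    allow = plugins.setdefault("allow", [])
    if "diagnostics-otel" not in allow:
        allow.append("diagnostics-otel")
    plugins.setdefault("entries", {})["diagnostics-otel"] = {"enabled": True}
    config["diagnostics"] = {
        "enabled": True,
        "otel": {
            "enabled": True,
            "endpoint": "https://api.orq.ai/v2/otel",
            "protocol": "http/protobuf",
            "headers": {"Authorization": f"Bearer {api_key}"},
            "serviceName": "openclaw",
            "traces": True,
            "metrics": True,
            "logs": False,
            "sampleRate": 1,
            "flushIntervalMs": 5000,
        },
    }
    return config

# Usage (writes the merged config back):
# existing = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
# CONFIG_PATH.write_text(json.dumps(enable_otel(existing, "<ORQ_API_KEY>"), indent=2))
```

The helper preserves any existing `plugins.allow` entries and only overwrites the `diagnostics` section.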

Step 3: Run OpenClaw

Start the OpenClaw gateway:
openclaw gateway run
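If no data appears after starting the gateway, it can help to check the exporter endpoint independently. The sketch below builds (but does not send) the kind of request an OTLP/HTTP exporter issues; the `/v1/traces` suffix follows the standard OTLP/HTTP convention of appending the signal path to the base endpoint. Whether Orq.ai returns a useful status for an empty payload is an assumption to verify against your account.

```python
import urllib.request

# Base endpoint from the config above plus the standard OTLP/HTTP traces path.
ENDPOINT = "https://api.orq.ai/v2/otel/v1/traces"
API_KEY = "<ORQ_API_KEY>"  # substitute your real key

req = urllib.request.Request(
    ENDPOINT,
    data=b"",  # an empty protobuf payload is enough for an auth/reachability check
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/x-protobuf",
    },
    method="POST",
)
# Uncomment to send; a 401 response indicates the key was rejected:
# print(urllib.request.urlopen(req).status)
```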

What Gets Traced

OpenClaw’s diagnostics-otel plugin emits spans for model usage, message processing, webhook handling, and tool execution. Orq.ai automatically detects and processes these spans, extracting:
| Attribute | Description |
| --- | --- |
| openclaw.model | The LLM model used |
| openclaw.provider | The LLM provider (e.g., anthropic, openai) |
| openclaw.sessionId | Conversation session identifier |
| openclaw.sessionKey | Session key for conversation grouping |
| openclaw.tokens.input | Input token count |
| openclaw.tokens.output | Output token count |
| openclaw.tokens.total | Total token count |
| openclaw.tokens.cache_read | Cached token count |
| openclaw.channel | Messaging channel (e.g., webchat, telegram) |
| openclaw.outcome | Processing outcome |
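As a quick illustration of working with these attributes downstream, the sketch below aggregates token usage per model from a list of span attribute dicts. The attribute names come from the table above; the flat-dict span shape is an assumption for the example.

```python
from collections import defaultdict

def summarize_tokens(spans):
    """Sum input/output tokens per model from openclaw.* span attributes (illustrative)."""
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for attrs in spans:
        model = attrs.get("openclaw.model", "unknown")
        totals[model]["input"] += attrs.get("openclaw.tokens.input", 0)
        totals[model]["output"] += attrs.get("openclaw.tokens.output", 0)
    return dict(totals)

# Example span attributes (values are made up):
spans = [
    {"openclaw.model": "claude-sonnet", "openclaw.tokens.input": 120, "openclaw.tokens.output": 80},
    {"openclaw.model": "claude-sonnet", "openclaw.tokens.input": 60, "openclaw.tokens.output": 40},
]
```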

Exported Spans

  • openclaw.model.usage — LLM inference calls with token usage, cost, and model details
  • openclaw.message.processed — End-to-end message processing with outcome and duration
  • openclaw.webhook.processed — Webhook handling for messaging platform integrations
  • openclaw.session.stuck — Session state warnings

Configuration Options

| Option | Default | Description |
| --- | --- | --- |
| diagnostics.otel.enabled | false | Enable OTEL export |
| diagnostics.otel.endpoint | (none) | OTLP/HTTP endpoint URL |
| diagnostics.otel.protocol | http/protobuf | Export protocol (only http/protobuf is supported) |
| diagnostics.otel.headers | {} | Headers for authentication |
| diagnostics.otel.serviceName | (none) | Service name for the resource |
| diagnostics.otel.traces | true | Export traces |
| diagnostics.otel.metrics | true | Export metrics |
| diagnostics.otel.logs | false | Export logs (can be high volume) |
| diagnostics.otel.captureContent | false | Capture message content in spans (sensitive data) |
| diagnostics.otel.sampleRate | 1.0 | Trace sampling rate (0.0–1.0) |
| diagnostics.otel.flushIntervalMs | 60000 | Metric export interval in ms (min 1000) |
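The constraints implied by this table (required endpoint, supported protocol, sample-rate range, minimum flush interval) can be checked before restarting the gateway. This validator is a sketch of those rules, not OpenClaw’s own validation logic.

```python
def validate_otel_config(otel: dict) -> list:
    """Check a diagnostics.otel section against the constraints in the options table."""
    errors = []
    if otel.get("enabled") and not otel.get("endpoint"):
        errors.append("endpoint is required when OTEL export is enabled")
    if otel.get("protocol", "http/protobuf") != "http/protobuf":
        errors.append("only the http/protobuf protocol is supported")
    rate = otel.get("sampleRate", 1.0)
    if not 0.0 <= rate <= 1.0:
        errors.append("sampleRate must be between 0.0 and 1.0")
    if otel.get("flushIntervalMs", 60000) < 1000:
        errors.append("flushIntervalMs must be at least 1000")
    return errors

# Example: a config missing its endpoint fails validation.
# validate_otel_config({"enabled": True})
```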

Next Steps

Verify Traces in the Studio.
No additional instrumentation libraries are needed — OpenClaw includes OpenTelemetry support via the built-in diagnostics-otel plugin.