Getting Started
OpenClaw is an open-source personal AI assistant that runs locally on your machine, connecting LLMs to messaging platforms and system tools. OpenClaw has a built-in diagnostics-otel plugin that exports traces, metrics, and logs over OTLP/HTTP — just enable the plugin and point it at Orq.ai.
Prerequisites
Before you begin, ensure you have:
- An Orq.ai account and API Key
- OpenClaw installed and running locally
- Node.js 20+
Install OpenClaw
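The install command itself is not reproduced on this page. Assuming OpenClaw ships as a global npm package (Node.js 20+ is listed as a prerequisite — the package name is an assumption; check the OpenClaw docs for the canonical command), installation would look like:

```shell
# Install the OpenClaw CLI globally (requires Node.js 20+)
npm install -g openclaw
```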
Configure OpenClaw
OpenClaw’s OTEL export is configured through ~/.openclaw/openclaw.json. You need to enable the diagnostics-otel plugin and configure the diagnostics.otel section.
Step 1: Enable the plugin
You can enable the plugin via the CLI, or directly in ~/.openclaw/openclaw.json:
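As a sketch of the config-file route — the exact schema is an assumption (the plugins.entries key and structure shown here are illustrative; consult the OpenClaw config reference):

```json
{
  "plugins": {
    "entries": {
      "diagnostics-otel": { "enabled": true }
    }
  }
}
```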
Step 2: Configure the OTEL exporter
Add the diagnostics section to ~/.openclaw/openclaw.json, pointing the endpoint at Orq.ai:
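A minimal sketch of the diagnostics.otel block, using the option names from the Configuration Options table below. The endpoint URL is a placeholder — substitute the OTLP/HTTP endpoint and API key from your Orq.ai workspace:

```json
{
  "diagnostics": {
    "otel": {
      "enabled": true,
      "endpoint": "https://<your-orq-otlp-endpoint>",
      "protocol": "http/protobuf",
      "headers": {
        "Authorization": "Bearer <YOUR_ORQ_API_KEY>"
      },
      "serviceName": "openclaw",
      "traces": true,
      "metrics": true,
      "logs": false
    }
  }
}
```

Leave captureContent at its default of false unless you explicitly want message content (sensitive data) recorded in spans.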
Step 3: Run OpenClaw
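Assuming the OpenClaw CLI exposes a gateway subcommand (an assumption — check openclaw --help for the actual invocation), starting it might look like:

```shell
# Start the local gateway; traces begin exporting once traffic flows
openclaw gateway
```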
Start the OpenClaw gateway.

What Gets Traced
OpenClaw’s diagnostics-otel plugin emits spans for model usage, message processing, webhook handling, and tool execution. Orq.ai automatically detects and processes these spans, extracting:
| Attribute | Description |
|---|---|
| openclaw.model | The LLM model used |
| openclaw.provider | The LLM provider (e.g., anthropic, openai) |
| openclaw.sessionId | Conversation session identifier |
| openclaw.sessionKey | Session key for conversation grouping |
| openclaw.tokens.input | Input token count |
| openclaw.tokens.output | Output token count |
| openclaw.tokens.total | Total token count |
| openclaw.tokens.cache_read | Cached token count |
| openclaw.channel | Messaging channel (e.g., webchat, telegram) |
| openclaw.outcome | Processing outcome |
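As an illustration only — every value below is hypothetical — the attributes extracted from a single model-usage span might look like:

```json
{
  "openclaw.model": "claude-sonnet-4",
  "openclaw.provider": "anthropic",
  "openclaw.sessionId": "sess_01J9XYZ",
  "openclaw.channel": "telegram",
  "openclaw.tokens.input": 1520,
  "openclaw.tokens.output": 310,
  "openclaw.tokens.total": 1830,
  "openclaw.outcome": "success"
}
```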
Exported Spans
- openclaw.model.usage — LLM inference calls with token usage, cost, and model details
- openclaw.message.processed — End-to-end message processing with outcome and duration
- openclaw.webhook.processed — Webhook handling for messaging platform integrations
- openclaw.session.stuck — Session state warnings
Configuration Options
| Option | Default | Description |
|---|---|---|
| diagnostics.otel.enabled | false | Enable OTEL export |
| diagnostics.otel.endpoint | — | OTLP/HTTP endpoint URL |
| diagnostics.otel.protocol | http/protobuf | Export protocol (only http/protobuf is supported) |
| diagnostics.otel.headers | {} | Headers for authentication |
| diagnostics.otel.serviceName | — | Service name for the resource |
| diagnostics.otel.traces | true | Export traces |
| diagnostics.otel.metrics | true | Export metrics |
| diagnostics.otel.logs | false | Export logs (can be high volume) |
| diagnostics.otel.captureContent | false | Capture message content in spans (sensitive data) |
| diagnostics.otel.sampleRate | 1.0 | Trace sampling rate (0.0–1.0) |
| diagnostics.otel.flushIntervalMs | 60000 | Metric export interval in ms (min 1000) |
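The sampleRate option is a probabilistic head sampler: 1.0 keeps every trace, 0.5 roughly half, 0.0 none. As a language-agnostic sketch (not OpenClaw's actual implementation), a deterministic ratio decision can be derived from the trace ID, in the style of OpenTelemetry's TraceIdRatioBased sampler:

```python
def should_sample(trace_id: int, sample_rate: float) -> bool:
    """Keep a trace iff the lower 63 bits of its ID fall below
    sample_rate * 2**63 (OpenTelemetry-style ratio sampling).

    Deterministic: the same trace ID always yields the same decision,
    so all spans of a trace are sampled consistently.
    """
    bound = int(sample_rate * (1 << 63))
    return (trace_id & ((1 << 63) - 1)) < bound

# sample_rate=1.0 keeps every trace; 0.0 drops every trace
print(should_sample(0x1234ABCD, 1.0))  # True
print(should_sample(0x1234ABCD, 0.0))  # False
```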
Next Steps
Verify Traces in the Studio. No additional instrumentation libraries are needed — OpenClaw includes OpenTelemetry support via the built-in diagnostics-otel plugin.