Observability
orq.ai offers comprehensive Observability capabilities for monitoring your AI applications and deployments.
Core Observability Features
Built-in Monitoring
Once your Deployments are in use, orq.ai automatically provides:
- Logs - Record requests, responses, errors, and performance metrics
- Traces - Track end-to-end request flows across services and operations
- Threads - Monitor concurrent execution contexts and conversation flows
OpenTelemetry Integration
For advanced observability, orq.ai supports the industry-standard OpenTelemetry protocol for detailed, code-level application monitoring:
- Observability Frameworks - Quick setup guide for sending OpenTelemetry traces to orq.ai
- OTLP Endpoint - https://api.orq.ai/v2/otel, a standard OpenTelemetry Protocol endpoint (a configuration sketch follows this list)
- Multi-Language Support - Python, Node.js, Java, and other OpenTelemetry-supported languages
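A minimal Python sketch of sending traces to this endpoint. It assumes the standard OTLP/HTTP convention of appending /v1/traces to the base URL and Bearer-token authentication with an ORQ_API_KEY placeholder; confirm the exact path and header in the Observability Frameworks setup guide.

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    # Assumption: OTLP/HTTP trace data goes to <base endpoint>/v1/traces.
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    # Assumption: Bearer-token auth; replace <ORQ_API_KEY> with your key.
    headers={"Authorization": "Bearer <ORQ_API_KEY>"},
)

provider = TracerProvider(resource=Resource.create({"service.name": "my-ai-app"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Any span created from here on is batched and exported to orq.ai.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("smoke-test"):
    pass
```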
OpenTelemetry Use Cases
Application Performance Monitoring
- Track LLM request latencies and token usage, as sketched below
- Monitor embedding generation and vector operations
- Analyze RAG pipeline performance
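For example, latency and token tracking can be as simple as wrapping each LLM call in a span. The sketch below assumes a hypothetical OpenAI-style client object; the gen_ai.* attribute names follow the (still incubating) OpenTelemetry GenAI semantic conventions, and the span's duration captures the request latency.

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm.monitoring")

def traced_chat(client, model: str, messages: list) -> str:
    # One span per LLM request; its duration is the request latency.
    with tracer.start_as_current_span("llm.chat") as span:
        span.set_attribute("gen_ai.request.model", model)
        response = client.chat.completions.create(model=model, messages=messages)
        # Record token usage as span attributes so it shows up per request.
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
        return response.choices[0].message.content
```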
AI Agent Observability
- Trace multi-step agent workflows and tool usage (see the example after this list)
- Monitor conversation flows and context management
- Debug complex agentic interactions
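The example below sketches how nested spans make a multi-step workflow traceable: a parent span groups the agent run, and each tool call becomes a child span. plan_steps and call_tool are hypothetical stand-ins for whatever agent framework you use.

```python
from opentelemetry import trace

tracer = trace.get_tracer("agent.tracing")

# Hypothetical stubs standing in for a real planner and tool registry.
def plan_steps(task: str):
    return [("search", {"query": task}), ("summarize", {"max_words": 50})]

def call_tool(name: str, args: dict) -> str:
    return f"{name} completed"

def run_agent(task: str):
    # Parent span covers the whole run; child spans capture each tool call,
    # so the exported trace mirrors the agent's step-by-step structure.
    with tracer.start_as_current_span("agent.run") as run_span:
        run_span.set_attribute("agent.task", task)
        for step, (tool_name, tool_args) in enumerate(plan_steps(task)):
            with tracer.start_as_current_span(f"tool.{tool_name}") as tool_span:
                tool_span.set_attribute("agent.step", step)
                tool_span.set_attribute("tool.output", call_tool(tool_name, tool_args))

run_agent("What changed in the Q3 report?")
```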
Custom Instrumentation
- Add spans for business logic and custom operations, as shown below
- Track model performance across different environments
- Implement distributed tracing across microservices
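A common pattern for custom spans is a small decorator that wraps business-logic functions; exceptions raised inside the span are recorded automatically by the context manager. This is a generic OpenTelemetry sketch, not an orq.ai-specific API; the traced helper and attribute choices are illustrative.

```python
import functools
from opentelemetry import trace

tracer = trace.get_tracer("custom.instrumentation")

def traced(span_name: str):
    # Wrap any function in a custom span named span_name.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            with tracer.start_as_current_span(span_name) as span:
                span.set_attribute("code.function", fn.__name__)
                return fn(*args, **kwargs)
        return wrapper
    return decorator

@traced("pricing.compute_quote")
def compute_quote(items: list[float], discount: float) -> float:
    return sum(items) * (1 - discount)

compute_quote([10.0, 25.0], discount=0.1)
```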