
Enhanced Observability & Debugging
Human-in-the-Loop Reviews
- Collect structured feedback on AI outputs with customizable Human Review Sets per trace type
- Directly add spans to datasets for continuous improvement
- Track contact IDs and thread context across chat completions
Faster Root Cause Analysis
- View retrieval configurations directly in span properties
- See evaluator names on spans for quick performance assessment
- Expanded OpenTelemetry support for more frameworks
Cost Optimization
- Optional response caching to reduce latency and API costs
- Fixed cost aggregation for image operations and Azure OpenAI
- More accurate token and billing tracking
Streamlined Experimentation
Improved Experiment Management
- Search across experiment entries
- Protection against accidental re-runs
- Persistent column settings and better cancellation handling
- Enhanced UI with clearer active states and progress indicators
AI Gateway Enhancements
Advanced Request Handling
- Automatic retries and fallback models for improved reliability
- Thread and contact tracking for conversation continuity
- Specify prompt versions directly in LLM calls
- Improved SSE streaming performance
Budget Controls
Workspace-Level Cost Management
- Set and monitor budgets at workspace and contact levels
- New Budgets API for programmatic cost control
- Automated alerts and spending limits
Platform Improvements
Model Management
- New image generation models and providers
- Intelligent model filtering based on capabilities
- Improved cost extraction and model selection UI
Developer Experience
- Better API parameter documentation for Knowledge Base
- Unsaved changes protection across Teams and Contacts
- Improved error handling and retry logic throughout
Today, we are bringing all the power of Deployments to our AI Gateway. Teams can now run their AI workloads through a reliable, battle-tested AI Gateway.
Features supported via the Gateway:
- Fallbacks
- Retry
- Contact Tracking
- Thread Management
- Cache
- Knowledge Bases
Start building today. To learn more, see the AI Gateway documentation.
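As a rough illustration of the features above, the sketch below builds an OpenAI-style chat-completion request with gateway-level options attached. The endpoint URL and the extension field names (`retry`, `fallbacks`, `contact`, `thread`, `cache`) are assumptions for illustration only, not the documented Orq API; consult the AI Gateway reference for the exact schema.

```python
import json

# Hypothetical gateway endpoint -- replace with the real one from the docs.
GATEWAY_URL = "https://api.example-gateway.ai/v2/chat/completions"

payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Summarize our Q3 results."}],
    # Illustrative gateway extensions mirroring the feature list above;
    # field names are assumptions, not the documented API.
    "orq": {
        "retry": {"count": 3, "on_codes": [429, 500, 502, 503]},
        "fallbacks": [{"model": "anthropic/claude-3-5-sonnet"}],
        "contact": {"id": "contact_123"},
        "thread": {"id": "thread_456"},
        "cache": {"type": "exact_match", "ttl": 3600},
    },
}

body = json.dumps(payload)
# A real call would POST `body` with an Authorization header, e.g.:
# requests.post(GATEWAY_URL, data=body, headers={"Authorization": f"Bearer {API_KEY}"})
```

If the primary model fails or rate-limits, a gateway configured this way retries and then falls back to the secondary model, while the thread and contact IDs keep the conversation attributed to the right user.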

- Complete request tracing across LLM calls, chain executions, and agent workflows
- Automatic instrumentation for latency, token usage, and error tracking
- Zero-code instrumentation for supported frameworks
- Identify bottlenecks in complex multi-step AI workflows
- Track costs with token-level granularity
- Debug agent reasoning paths and tool usage in production
- Correlate AI operations with upstream/downstream services
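To make "token-level granularity" concrete, here is a minimal sketch of how per-span cost can be derived from token usage and aggregated over a trace. The per-token prices are placeholder numbers, not real provider rates.

```python
# USD per 1M tokens -- placeholder rates for illustration only.
PRICES_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def span_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the cost of a single LLM span from its token usage."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Aggregate cost across the spans of one traced workflow:
spans = [("gpt-4o", 1200, 350), ("gpt-4o", 800, 150)]
total = sum(span_cost(m, i, o) for m, i, o in spans)
print(f"trace cost: ${total:.6f}")
```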
Supported frameworks
- Agno
- AutoGen
- BeeAI
- CrewAI
- DSPy
- Google ADK
- Haystack
- Instructor
- LangChain / LangGraph
- LiteLLM
- LiveKit
- LlamaIndex
- Mastra
- OpenAI Agents
- Pydantic AI
- Semantic Kernel
- SmolAgents
- Vercel AI SDK
A Vercel AI SDK provider for the Orq AI platform that enables seamless integration of AI models with the Vercel AI SDK ecosystem.
Features
Find more info in the GitHub Repository.

- Full Vercel AI SDK Compatibility: Works with all Vercel AI SDK functions (generateText, streamText, embed, etc.)
- Multiple Model Types: Support for chat, completion, embedding, and image generation models
- Streaming Support: Real-time streaming responses for a better user experience
- Type-safe: Fully written in TypeScript with comprehensive type definitions
- Orq Platform Integration: Direct access to Orq AI's model routing and optimization
Installation
Getting Started



Introducing the @traced decorator, a powerful new way to capture function-level traces directly in your Python code.
- Automatically logs function inputs, outputs, and metadata
- Supports nested spans and custom span types (LLM, agent, tool, etc.)
- Works seamlessly with the Orq SDK initialization (no separate init required)
- Integrates with OpenTelemetry for end-to-end distributed tracing
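For intuition, the sketch below shows how a function-level tracing decorator of this kind works: it wraps a function, records its inputs, output, duration, and span type, and collects the resulting span. This is a conceptual illustration only, not the Orq SDK implementation; the real `@traced` decorator comes from the Orq SDK and exports spans via OpenTelemetry.

```python
import functools
import time
import uuid

SPANS = []  # stand-in for an exporter; the real SDK ships spans via OpenTelemetry

def traced(span_type="function"):
    """Conceptual sketch of a function-level tracing decorator."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {
                "id": uuid.uuid4().hex,
                "name": fn.__name__,
                "type": span_type,  # e.g. "llm", "agent", "tool"
                "inputs": {"args": args, "kwargs": kwargs},
            }
            start = time.perf_counter()
            try:
                span["output"] = fn(*args, **kwargs)
                return span["output"]
            finally:
                # Recorded even if the function raises.
                span["duration_ms"] = (time.perf_counter() - start) * 1000
                SPANS.append(span)
        return wrapper
    return decorator

@traced(span_type="tool")
def lookup(city):
    return {"city": city, "temp_c": 21}

lookup("Amsterdam")
```

Nesting decorated functions produces nested spans, which is how a single trace can cover an agent, the tools it calls, and the LLM calls inside each tool.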
To learn more, see our Observability Frameworks.


Examples
Generate image with Seedream
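As a rough sketch, an OpenAI-compatible image-generation request for a model like Seedream could be shaped as below. The model identifier and endpoint path are assumptions; check the model garden for the exact Seedream model name.

```python
import json

payload = {
    "model": "bytedance/seedream-3.0",  # hypothetical identifier -- verify in the model garden
    "prompt": "A lighthouse at dawn, watercolor style",
    "n": 1,
    "size": "1024x1024",
}
body = json.dumps(payload)
# A real call would POST `body` to the gateway's /images/generations
# endpoint with your API key in the Authorization header.
```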
Added support for Workspace Budgets. It's now possible to set a budget for your workspace to control your organization's AI spend.
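A workspace budget configuration might look like the sketch below. The field names and endpoint are illustrative assumptions, not the documented Budgets API schema; see the Budgets API reference for the actual shape.

```python
import json

# Illustrative budget configuration -- field names are assumptions.
budget = {
    "scope": "workspace",
    "limit_usd": 500.0,                   # hard spending limit per period
    "period": "monthly",
    "alert_thresholds": [0.5, 0.8, 1.0],  # notify at 50%, 80%, and 100% of the limit
}
body = json.dumps(budget)
# A real call might POST `body` to a budgets endpoint with your API key.
```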
