AI Router

Overview

LiveKit Agents is a framework for building real-time voice and multimodal AI agents that communicate over WebRTC. By connecting LiveKit Agents to Orq.ai’s AI Router, you get production-ready voice AI with enterprise-grade LLM access without vendor lock-in.

Key Benefits

Orq.ai’s AI Router enhances your LiveKit Agents with:

Complete Observability

Track every LLM call, tool use, and agent interaction with detailed traces

Built-in Reliability

Automatic fallbacks, retries, and load balancing for production resilience

Cost Optimization

Real-time cost tracking and spend management across all your AI operations

Multi-Provider Access

Access 300+ LLMs across 20+ providers through a single, unified integration

Prerequisites

Before integrating LiveKit Agents with Orq.ai, ensure you have:
  • An Orq.ai account and API Key
  • Python 3.9 or higher
  • A LiveKit account with URL, API key, and API secret
To set up your API key, see API keys & Endpoints.

Installation

Install LiveKit Agents with the OpenAI plugin:
pip install "livekit-agents[openai]~=1.0"

Configuration

Configure LiveKit Agents to use Orq.ai’s AI Router via the OpenAI plugin’s base_url parameter:
Python
import os

from livekit.plugins import openai

# Configure OpenAI-compatible LLM with Orq.ai AI Router
llm = openai.LLM(
    model="gpt-4o",
    base_url="https://api.orq.ai/v2/router",
    api_key=os.getenv("ORQ_API_KEY"),
)
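Since every `openai.LLM(...)` call repeats the same router settings, it can help to centralize them. The sketch below is one way to do that — `router_llm_kwargs` is our own helper, not part of the LiveKit or Orq.ai SDKs, and the model identifiers you pass should be confirmed against the Orq.ai model catalog:

```python
import os

ORQ_ROUTER_BASE_URL = "https://api.orq.ai/v2/router"

def router_llm_kwargs(model: str) -> dict:
    """Build the keyword arguments for openai.LLM so every agent in the
    app points at the same Orq.ai router endpoint."""
    return {
        "model": model,
        "base_url": ORQ_ROUTER_BASE_URL,
        # os.environ (not os.getenv) so a missing key fails at startup
        "api_key": os.environ["ORQ_API_KEY"],
    }

# Usage: llm = openai.LLM(**router_llm_kwargs("gpt-4o"))
```

Switching the whole app to a different model (or provider, via the router) then becomes a one-line change at the call site.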

Environment Variables

Set up your LiveKit and Orq.ai credentials:
export LIVEKIT_URL="wss://your-project.livekit.cloud"
export LIVEKIT_API_KEY="your-livekit-api-key"
export LIVEKIT_API_SECRET="your-livekit-api-secret"
export ORQ_API_KEY="your-orq-api-key"
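A missing credential usually surfaces later as an opaque connection or 401 error, so it can be worth failing fast at startup. A minimal sketch — the `check_credentials` helper is ours, not part of either SDK:

```python
import os

# The four variables from the export commands above.
REQUIRED_VARS = (
    "LIVEKIT_URL",
    "LIVEKIT_API_KEY",
    "LIVEKIT_API_SECRET",
    "ORQ_API_KEY",
)

def check_credentials() -> list:
    """Return the names of any required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.getenv(name)]

# Call before cli.run_app(...), e.g.:
#   if missing := check_credentials():
#       raise SystemExit(f"missing env vars: {', '.join(missing)}")
```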

Basic Voice Agent

Here’s a complete example of a voice agent using Orq.ai’s AI Router:
Python
import os
from livekit.agents import Agent, AgentSession, WorkerOptions, cli
from livekit.plugins import openai

class Assistant(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful voice assistant.",
        )

async def entrypoint(ctx):
    session = AgentSession(
        llm=openai.LLM(
            model="gpt-4o",
            base_url="https://api.orq.ai/v2/router",
            api_key=os.getenv("ORQ_API_KEY"),
        ),
    )
    await session.start(room=ctx.room, agent=Assistant())

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
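Assuming the file is saved as agent.py (the filename is yours to choose), it can be started with the run modes that `cli.run_app` provides:

```shell
# Development mode with hot reload, connected to your LiveKit project
python agent.py dev

# Chat with the agent locally in the terminal, no LiveKit room required
python agent.py console

# Production mode
python agent.py start
```

`dev` and `start` register the worker with LiveKit using the LIVEKIT_* environment variables described above.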

Agent with Function Tools

Add tools to your voice agent for dynamic responses:
Python
import os
from livekit.agents import Agent, AgentSession, WorkerOptions, cli, function_tool
from livekit.plugins import openai

@function_tool
async def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

@function_tool
async def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    return f"The current time in {timezone} is 14:30"

class Assistant(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a helpful voice assistant with access to weather and time tools.",
            tools=[get_weather, get_time],
        )

async def entrypoint(ctx):
    session = AgentSession(
        llm=openai.LLM(
            model="gpt-4o",
            base_url="https://api.orq.ai/v2/router",
            api_key=os.getenv("ORQ_API_KEY"),
        ),
    )
    await session.start(room=ctx.room, agent=Assistant())

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
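The tools above return canned strings. As a sketch of a real implementation, `get_time` can be backed by the standard-library `zoneinfo` module; returning the error as text (rather than raising) lets the model apologize or re-ask instead of the tool call failing the turn. This body is our replacement for the stub — decorate it with `@function_tool` exactly as in the example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

async def get_time(timezone: str) -> str:
    """Get the current time in an IANA timezone, e.g. 'Europe/Amsterdam'."""
    try:
        now = datetime.now(ZoneInfo(timezone))
    except (ZoneInfoNotFoundError, ValueError):
        # Hand the problem back to the LLM as text so it can recover.
        return f"Unknown timezone: {timezone!r}"
    return f"The current time in {timezone} is {now:%H:%M}"
```

Because the tool body is an ordinary async function, it can be unit-tested without starting a LiveKit session.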

Observability

Installation

pip install opentelemetry-api \
    opentelemetry-sdk \
    "opentelemetry-exporter-otlp-proto-http" \
    "livekit-agents[openai]~=1.0"
LiveKit Agents has built-in OTEL support via livekit.agents.telemetry. No additional instrumentation package is required.

Configuring Orq.ai Observability

Use set_tracer_provider from livekit.agents.telemetry to register the exporter. Call it before your agent entrypoint starts:
import os
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from livekit.agents.telemetry import set_tracer_provider

exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
set_tracer_provider(provider)
LiveKit uses livekit.agents.telemetry.set_tracer_provider, not the standard opentelemetry.trace.set_tracer_provider. BatchSpanProcessor is preferred over SimpleSpanProcessor for production voice workloads.
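One failure mode worth guarding against: if ORQ_API_KEY is unset, the exporter sends unauthenticated requests and spans are dropped silently. A small fail-fast sketch — `orq_otlp_headers` is our helper, not part of either SDK:

```python
import os

def orq_otlp_headers() -> dict:
    """Build the Authorization header for OTLPSpanExporter, raising at
    startup if ORQ_API_KEY is missing instead of losing traces later."""
    key = os.environ.get("ORQ_API_KEY")
    if not key:
        raise RuntimeError("ORQ_API_KEY is not set; traces would be rejected")
    return {"Authorization": f"Bearer {key}"}

# Usage:
#   OTLPSpanExporter(
#       endpoint="https://api.orq.ai/v2/otel/v1/traces",
#       headers=orq_otlp_headers(),
#   )
```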

Basic Example

import os
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from livekit.agents.telemetry import set_tracer_provider
from livekit.agents import Agent, AgentSession, AutoSubscribe, JobContext, WorkerOptions, cli
from livekit.plugins import openai


def setup_tracing():
    exporter = OTLPSpanExporter(
        endpoint="https://api.orq.ai/v2/otel/v1/traces",
        headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
    )
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    set_tracer_provider(provider)


async def entrypoint(ctx: JobContext):
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    session = AgentSession(
        llm=openai.realtime.RealtimeModel(voice="alloy"),
    )
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )


if __name__ == "__main__":
    setup_tracing()
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))