AI Router

Overview

LangChain is a framework for building LLM-powered applications through composable chains, agents, and integrations with external data sources. By connecting LangChain to Orq.ai’s AI Router, you access 300+ models through a single base URL change.

Key Benefits

Orq.ai’s AI Router enhances your LangChain applications with:

Complete Observability

Track every chain step, tool use, and LLM call with detailed traces

Built-in Reliability

Automatic fallbacks, retries, and load balancing for production resilience

Cost Optimization

Real-time cost tracking and spend management across all your AI operations

Multi-Provider Access

Access 300+ LLMs and 20+ providers through a single, unified integration
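Fallbacks and retries happen inside the router, so your LangChain code stays unchanged. Conceptually, the fallback pattern works like the minimal sketch below; the `call_model` function and model names are hypothetical stand-ins for illustration, not the router's actual implementation:

```python
def invoke_with_fallbacks(prompt, models, call_model):
    """Try each model in order, falling back to the next on failure."""
    errors = []
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as exc:  # in practice, only retryable errors trigger fallback
            errors.append((model, exc))
    raise RuntimeError(f"All models failed: {errors}")

# Stand-in model call: the "primary" model fails, the "backup" succeeds
def call_model(model, prompt):
    if model == "primary":
        raise TimeoutError("primary is down")
    return f"{model} answered: {prompt}"

print(invoke_with_fallbacks("hi", ["primary", "backup"], call_model))
# backup answered: hi
```

With the router, this loop runs server-side: a failed call to your first-choice model is transparently retried against the configured fallbacks.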

Prerequisites

Before integrating LangChain with Orq.ai, ensure you have:
  • An Orq.ai account and API Key
  • Python 3.8 or higher
To set up your API key, see API keys & Endpoints.
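The snippets below read the key from the `ORQ_API_KEY` environment variable. One way to set it is to export it in your shell before running the examples (replace the placeholder with your real key):

```shell
# Make the key available to the current shell session
export ORQ_API_KEY="your-orq-api-key"

# Verify it is set
echo "${ORQ_API_KEY:+key is set}"
```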

Installation

pip install langchain langchain-openai

Configuration

Configure LangChain to use Orq.ai’s AI Router via ChatOpenAI with a custom base_url:
Python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

Basic Example

Python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

result = llm.invoke("Explain quantum computing in simple terms.")
print(result.content)

Chains

Build composable chains using LangChain’s pipe operator:
Python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])

chain = prompt | llm
result = chain.invoke({"input": "Tell me a joke about programming."})
print(result.content)
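The `|` operator works because prompts and models are both LangChain runnables: `prompt | llm` produces a new runnable that feeds the prompt's output into the model. As a rough, dependency-free illustration of the idea (a toy class, not LangChain's actual implementation):

```python
class ToyRunnable:
    """Minimal stand-in for a LangChain runnable: wraps a function."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `a | b` returns a new runnable that applies a, then b
        return ToyRunnable(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = ToyRunnable(lambda d: f"Tell me a joke about {d['input']}.")
llm = ToyRunnable(lambda text: f"LLM saw: {text!r}")

chain = prompt | llm
print(chain.invoke({"input": "programming"}))
# LLM saw: 'Tell me a joke about programming.'
```

Because every step in a chain implements the same `invoke` interface, you can keep composing: parsers, retrievers, and tools chain on with further `|` operators.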

Streaming

Python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

for chunk in llm.stream("Write a short poem about the ocean."):
    print(chunk.content, end="", flush=True)
print()
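Each chunk carries only a fragment of the response; if you also need the full text afterwards (for logging or post-processing), accumulate the fragments as you print them. A sketch using a stand-in stream; with a real `llm.stream(...)` call, each chunk exposes its fragment on `.content`:

```python
def print_and_collect(stream):
    """Print text fragments as they arrive and return the assembled text."""
    parts = []
    for fragment in stream:
        print(fragment, end="", flush=True)
        parts.append(fragment)
    print()
    return "".join(parts)

# Stand-in for a streamed response: any iterable of text fragments works
full = print_and_collect(["The ocean ", "rolls ", "on."])
# full == "The ocean rolls on."
```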

Model Selection

With Orq.ai, you can use any supported model from 20+ providers:
Python
from langchain_openai import ChatOpenAI
import os

# Use Claude
claude = ChatOpenAI(
    model="claude-sonnet-4-5-20250929",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

# Use Gemini
gemini = ChatOpenAI(
    model="gemini-2.5-flash",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

# Use Groq
groq = ChatOpenAI(
    model="groq/llama-3.3-70b-versatile",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

Observability

Getting Started

Step 1: Install dependencies

Run the following command to install the required libraries:
pip install langchain langchain-openai langgraph

Step 2: Configure Environment Variables

You will need:
  • An Orq.ai API Key (from your Orq.ai workspace). If you don’t have an account, sign up here.
  • An API key for the LLM provider you’ll be using (e.g., OpenAI).

Set the following environment variables:
import os

# Orq.ai OpenTelemetry exporter
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Bearer {os.getenv('ORQ_API_KEY')}"

# Enable LangSmith tracing in OTEL-only mode
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_OTEL_ONLY"] = "true"

# OpenAI API key (replace with your actual key, or set it in your shell beforehand)
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
Once set, all LangChain and LangGraph traces will automatically be sent to your Orq.ai workspace.

Step 3: Sending traces to Orq

Here’s an example of running a simple LangGraph ReAct agent with a custom tool:
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_react_agent(
    model="openai:gpt-5-mini",
    tools=[get_weather],
    prompt="You are a helpful assistant.",
)

# Run the agent — this will generate traces in Orq.ai
agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}]}
)

More Examples

Sending Traces with LangChain

The following snippet shows how to create and run a simple LangChain app that also sends traces to Orq.ai.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Define a prompt and chain
prompt = ChatPromptTemplate.from_template("Tell me a {action} about {topic}")
model = ChatOpenAI(temperature=0.7)
chain = prompt | model

# Invoke the chain
result = chain.invoke({"topic": "programming", "action": "joke"})
print(result.content)
At this point, you should see traces from both the LangGraph and LangChain examples in your Orq.ai workspace.