
AI Router

Overview

LangGraph is a framework for building stateful, multi-actor AI applications with LLMs. It extends LangChain with graph-based agent orchestration, cycles, and controllability. By connecting LangGraph to Orq.ai’s AI Router, you get production-ready agentic workflows with access to 300+ models.

Key Benefits

Orq.ai’s AI Router enhances your LangGraph applications with:

Complete Observability

Track every agent step, tool use, and graph transition with detailed traces

Built-in Reliability

Automatic fallbacks, retries, and load balancing for production resilience

Cost Optimization

Real-time cost tracking and spend management across all your AI operations

Multi-Provider Access

Access 300+ LLMs from 20+ providers through a single, unified integration

Prerequisites

Before integrating LangGraph with Orq.ai, ensure you have:
  • An Orq.ai account and API Key
  • Python 3.8 or higher
To set up your API key, see API keys & Endpoints.
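
To confirm the key is available to the examples below, you can check for it at startup (a minimal sketch, assuming the key is stored in the ORQ_API_KEY environment variable):
Python
import os

# Fail fast if the Orq.ai API key is missing from the environment
if not os.getenv("ORQ_API_KEY"):
    raise RuntimeError("Set ORQ_API_KEY before running the examples below")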

Installation

pip install langgraph langchain-openai langchain-core

Configuration

Configure LangGraph to use Orq.ai’s AI Router by passing a ChatOpenAI instance with a custom base_url:
Python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)
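
To verify the configuration before wiring the model into a graph, you can invoke it directly (a quick sanity check; the prompt is only illustrative):
Python
response = llm.invoke("Say hello in one sentence.")
print(response.content)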

Basic Agent Example

Here’s a complete example using create_agent with a tool:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

agent = create_agent(llm, tools=[get_weather])

result = agent.invoke({"messages": [("user", "What's the weather in San Francisco?")]})
print(result["messages"][-1].content)
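
The result contains the full message history, including the model's tool call and the tool's response. To inspect every step instead of only the final answer, iterate over the messages (a small sketch using the result from above):
Python
for message in result["messages"]:
    # message.type is "human", "ai", or "tool"; tool calls are attached to AI messages
    print(f"{message.type}: {message.content}")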

Agent with Multiple Tools

An agent can call several tools in a single run; pass them all to create_agent along with an optional system_prompt:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

agent = create_agent(
    llm,
    tools=[get_weather, add, multiply],
    system_prompt="You are a helpful assistant with access to weather and math tools.",
)

result = agent.invoke({
    "messages": [("user", "What is 15 * 4? Also check the weather in Tokyo.")]
})
print(result["messages"][-1].content)

Streaming

Stream agent steps as they happen:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

agent = create_agent(llm, tools=[get_weather])

for chunk in agent.stream(
    {"messages": [("user", "What's the weather in Paris?")]},
    stream_mode="updates",
):
    print(chunk)
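
stream_mode="updates" emits one chunk per graph step. For token-by-token output from the model, recent LangGraph releases also support stream_mode="messages" (a hedged sketch, assuming your installed version exposes this mode):
Python
for token, metadata in agent.stream(
    {"messages": [("user", "What's the weather in Paris?")]},
    stream_mode="messages",
):
    # Each item is a (message chunk, metadata) pair; print content as it arrives
    if token.content:
        print(token.content, end="", flush=True)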

Model Selection

With Orq.ai, you can use any supported model from 20+ providers:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Use Claude
claude_agent = create_agent(
    ChatOpenAI(
        model="claude-sonnet-4-5-20250929",
        api_key=os.getenv("ORQ_API_KEY"),
        base_url="https://api.orq.ai/v2/router",
    ),
    tools=[get_weather],
)

# Use Gemini
gemini_agent = create_agent(
    ChatOpenAI(
        model="gemini-2.5-flash",
        api_key=os.getenv("ORQ_API_KEY"),
        base_url="https://api.orq.ai/v2/router",
    ),
    tools=[get_weather],
)

result = claude_agent.invoke({"messages": [("user", "What's the weather in London?")]})
print(result["messages"][-1].content)

Observability

Asset Capture in the Control Tower

When you instrument LangGraph with OpenTelemetry and send traces to Orq.ai, agents, tools, and models are automatically extracted from the spans and registered in the Control Tower.

Installation

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp \
            langchain langchain-openai langchain-core langgraph

Configuration

Python
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.orq.ai/v2/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"]  = "Authorization=Bearer <your-orq-api-key>"

os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"]      = "true"
os.environ["LANGSMITH_OTEL_ONLY"]    = "true"

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

Examples

Agent with a single tool — captures agent/weather_agent, tool/get_weather, model/gpt-4o-mini
Python
from typing import Annotated, Literal, TypedDict
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain.tools import tool
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class State(TypedDict):
    messages: Annotated[list, add_messages]

@tool
def get_weather(location: str) -> str:
    """Get weather for a location."""
    data = {"tokyo": "Sunny, 22°C", "paris": "Cloudy, 15°C", "new york": "Rainy, 18°C"}
    return data.get(location.lower(), f"No data for {location}")

tools = [get_weather]
llm   = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)

def weather_agent(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

def route(state: State) -> Literal["tools", "__end__"]:
    last = state["messages"][-1]
    return "tools" if hasattr(last, "tool_calls") and last.tool_calls else "__end__"

graph = StateGraph(State)
graph.add_node("weather_agent", weather_agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "weather_agent")
graph.add_conditional_edges("weather_agent", route)
graph.add_edge("tools", "weather_agent")

app = graph.compile()
result = app.invoke({"messages": [HumanMessage(content="Weather in Tokyo?")]})
print(result["messages"][-1].content)

Agent with multiple tools — captures agent/assistant_agent, tool/calculator, tool/get_time, model/gpt-4o-mini
Python
# Reuses State, route, and the imports from the previous example
from datetime import datetime

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # WARNING: This is a simplified example for demonstration only.
    # In production, use ast.literal_eval() or a dedicated math parser
    # library instead of eval() to prevent code injection attacks.
    try:
        return str(eval(expression))
    except Exception:
        return "Error"

@tool
def get_time() -> str:
    """Get the current time."""
    return datetime.now().strftime("%H:%M:%S")

tools = [calculator, get_time]
llm   = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)

def assistant_agent(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(State)
graph.add_node("assistant_agent", assistant_agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "assistant_agent")
graph.add_conditional_edges("assistant_agent", route)
graph.add_edge("tools", "assistant_agent")

app = graph.compile()
result = app.invoke({"messages": [HumanMessage(content="What is 25 * 4? Also what time is it?")]})
print(result["messages"][-1].content)

Agent with a different model, no tools — captures agent/smart_agent, model/gpt-4o
Python
# Reuses State and the imports from the first example
from langgraph.graph import END

llm = ChatOpenAI(model="gpt-4o", temperature=0.5, max_tokens=100)

def smart_agent(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(State)
graph.add_node("smart_agent", smart_agent)
graph.add_edge(START, "smart_agent")
graph.add_edge("smart_agent", END)

app = graph.compile()
result = app.invoke({"messages": [HumanMessage(content="Explain quantum computing briefly.")]})
print(result["messages"][-1].content)
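
Because the BatchSpanProcessor exports spans asynchronously, a short-lived script can exit before its traces reach Orq.ai. Flushing the provider before exit avoids dropped spans (a small sketch reusing the provider created in the Configuration block above):
Python
# Make sure all buffered spans are exported before the process exits
provider.force_flush()
provider.shutdown()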