LangGraph is a framework for building stateful, multi-actor AI applications with LLMs. It extends LangChain with graph-based agent orchestration, cycles, and controllability. By connecting LangGraph to Orq.ai’s AI Router, you get production-ready agentic workflows with access to 300+ models.
Here’s a complete example using create_agent with a tool:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

agent = create_agent(llm, tools=[get_weather])

result = agent.invoke({"messages": [("user", "What's the weather in San Francisco?")]})
print(result["messages"][-1].content)
An agent can use multiple tools, and you can set a system prompt:

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

agent = create_agent(
    llm,
    tools=[get_weather, add, multiply],
    system_prompt="You are a helpful assistant with access to weather and math tools.",
)

result = agent.invoke({
    "messages": [("user", "What is 15 * 4? Also check the weather in Tokyo.")]
})
print(result["messages"][-1].content)
To stream intermediate steps as the agent runs, use stream() with stream_mode="updates":

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

agent = create_agent(llm, tools=[get_weather])

for chunk in agent.stream(
    {"messages": [("user", "What's the weather in Paris?")]},
    stream_mode="updates",
):
    print(chunk)
With Orq.ai, you can use any supported model from 20+ providers by changing only the model name:
Python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Use Claude
claude_agent = create_agent(
    ChatOpenAI(
        model="claude-sonnet-4-5-20250929",
        api_key=os.getenv("ORQ_API_KEY"),
        base_url="https://api.orq.ai/v3/router",
    ),
    tools=[get_weather],
)

# Use Gemini
gemini_agent = create_agent(
    ChatOpenAI(
        model="gemini-2.5-flash",
        api_key=os.getenv("ORQ_API_KEY"),
        base_url="https://api.orq.ai/v3/router",
    ),
    tools=[get_weather],
)

result = claude_agent.invoke({"messages": [("user", "What's the weather in London?")]})
print(result["messages"][-1].content)
orq_ai_sdk.langchain provides a global setup() function that automatically instruments all LangGraph components. Call it once at the top of your application, and every LLM call, graph node, tool execution, and retrieval is traced automatically; no callback wiring is needed.
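A minimal sketch of that one-time call, assuming setup() takes no required arguments (check the SDK docs of your installed version for configuration options); the try/except keeps the snippet runnable even in environments without the SDK:

```python
# One-time global instrumentation for LangGraph via orq_ai_sdk.
# Assumption: setup() needs no arguments; consult the orq_ai_sdk docs
# for options such as API key or endpoint configuration.
try:
    from orq_ai_sdk.langchain import setup

    setup()  # after this, LLM calls, graph nodes, and tools are traced
    instrumented = True
except ImportError:
    instrumented = False  # SDK not installed; tracing stays off

print(f"Orq tracing enabled: {instrumented}")
```

Because instrumentation is global, no other code changes are required in your agent or graph definitions.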
Zero configuration
One setup() call and tracing is live: no callbacks, no OpenTelemetry exporters, no extra wiring.
Full graph visibility
Traces preserve the parent-child structure of your graph so you see exactly which node triggered each LLM call or tool use.
Token usage and costs
Input and output token counts are captured on every LLM call and synced to Orq.ai for cost tracking.
Asset capture
Agents, tools, and models are automatically registered in Control Tower from your traces.
Traces appear in the Orq.ai Studio under the Traces tab. Each run is captured as a tree reflecting your graph structure: top-level chain spans for each node, with LLM calls, tool executions, and retrievals nested underneath.
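Purely as an illustration (the class and field names below are hypothetical, not Orq.ai's actual span schema), the nesting described above can be pictured as a small tree of spans:

```python
# Toy model of a trace tree: a top-level chain span for a graph node,
# with the LLM call and tool execution nested underneath it.
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    kind: str  # "chain", "llm", "tool", or "retrieval"
    children: list["Span"] = field(default_factory=list)

trace = Span("agent_node", "chain", [
    Span("gpt-4o call", "llm"),
    Span("get_weather", "tool"),
])

def depth(span: Span) -> int:
    """Number of levels in the span tree rooted at `span`."""
    return 1 + max((depth(c) for c in span.children), default=0)

print(depth(trace))  # the LLM and tool spans sit one level under the node span
```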