AI Router
Route your LLM calls through the AI Router with a single base URL change. Zero vendor lock-in: always run on the best model at the lowest cost for your use case.
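For reference, here is a minimal sketch of that base URL change using the OpenAI Python SDK directly. It assumes the router exposes an OpenAI-compatible chat completions endpoint at the same https://api.orq.ai/v3/router URL that the LlamaIndex examples below use; the model name and ORQ_API_KEY environment variable are taken from those examples.
Python
import os

from openai import OpenAI

# Point any OpenAI-compatible client at the AI Router; the base URL is the only change
client = OpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router",
)

response = client.chat.completions.create(
    model="gpt-4o",  # or any other model exposed through the router
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)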
LlamaIndex Agents is an agentic framework built on top of LlamaIndex that enables LLMs to reason, use tools, and execute multi-step workflows. By connecting LlamaIndex Agents to Orq.ai’s AI Router, you get production-ready agents with enterprise-grade capabilities without changing your existing code.
Here’s a complete example using ReActAgent with a simple tool:
Python
import asyncio
import os

from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai_like import OpenAILike

# Configure the LLM to call Orq.ai's AI Router
llm = OpenAILike(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    api_base="https://api.orq.ai/v3/router",
    is_chat_model=True,
)

# Define a tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Create the agent
agent = ReActAgent(
    tools=[FunctionTool.from_defaults(get_weather)],
    llm=llm,
)

async def main():
    response = await agent.run("What's the weather in San Francisco?")
    print(response)

asyncio.run(main())
Build agents with multiple tools for complex reasoning:
Python
import asyncio
import os

from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    api_base="https://api.orq.ai/v3/router",
    is_chat_model=True,
)

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

def get_company_info(company: str) -> str:
    """Get basic information about a company."""
    companies = {
        "openai": "OpenAI is an AI research company founded in 2015.",
        "anthropic": "Anthropic is an AI safety company founded in 2021.",
    }
    return companies.get(company.lower(), f"No information found for {company}")

agent = ReActAgent(
    tools=[
        FunctionTool.from_defaults(add),
        FunctionTool.from_defaults(multiply),
        FunctionTool.from_defaults(get_company_info),
    ],
    llm=llm,
    system_prompt="You are a helpful assistant with access to math and company information tools.",
)

async def main():
    response = await agent.run("What is 15 * 4, and tell me about Anthropic?")
    print(response)

asyncio.run(main())
With Orq.ai, you can use any supported model from 20+ providers:
Python
import asyncio
import os

from llama_index.core.agent.workflow import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai_like import OpenAILike

def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    return f"The current time in {timezone} is 14:30"

# Use Claude
claude_llm = OpenAILike(
    model="claude-sonnet-4-5-20250929",
    api_key=os.getenv("ORQ_API_KEY"),
    api_base="https://api.orq.ai/v3/router",
    is_chat_model=True,
)

# Use Gemini
gemini_llm = OpenAILike(
    model="gemini-2.5-flash",
    api_key=os.getenv("ORQ_API_KEY"),
    api_base="https://api.orq.ai/v3/router",
    is_chat_model=True,
)

agent = ReActAgent(
    tools=[FunctionTool.from_defaults(get_time)],
    llm=claude_llm,
)

async def main():
    response = await agent.run("What time is it in Tokyo?")
    print(response)

asyncio.run(main())
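Both clients are configured the same way; only the model name differs. To run the identical agent on Gemini, pass llm=gemini_llm when constructing the ReActAgent. No other code changes are needed.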