AI Router

Overview

Microsoft AutoGen is a framework for building multi-agent conversational AI systems in which agents collaborate to solve problems through automated interactions. By connecting AutoGen to Orq.ai’s AI Router, you get access to 300+ models for your multi-agent workflows with a single configuration change.

Key Benefits

Orq.ai’s AI Router enhances your AutoGen applications with:

Complete Observability

Track every agent conversation, tool use, and multi-agent interaction

Built-in Reliability

Automatic fallbacks, retries, and load balancing for production resilience

Cost Optimization

Real-time cost tracking and spend management across all your AI operations

Multi-Provider Access

Access 300+ LLMs and 20+ providers through a single, unified integration

Prerequisites

Before integrating AutoGen with Orq.ai, ensure you have:
  • An Orq.ai account and API Key
  • Python 3.10 or higher
To set up your API key, see API keys & Endpoints.

Installation

pip install "autogen-agentchat" "autogen-ext[openai]"

Configuration

Configure AutoGen to use Orq.ai’s AI Router via OpenAIChatCompletionClient with a custom base_url:
Python
import os
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="https://api.orq.ai/v2/router",
    api_key=os.getenv("ORQ_API_KEY"),
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "unknown",
        "structured_output": True,
    },
)
The model_info dict is required when using a custom base_url so AutoGen knows the model’s capabilities.
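
The same base_url, API key, and model_info are repeated in every example below, so you may want a small helper to build clients. A convenience sketch (make_orq_client is illustrative, not part of AutoGen or Orq.ai):
Python
import os
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Hypothetical convenience helper, not part of AutoGen or Orq.ai
def make_orq_client(model: str) -> OpenAIChatCompletionClient:
    """Build an AutoGen model client that routes requests through Orq.ai."""
    return OpenAIChatCompletionClient(
        model=model,
        base_url="https://api.orq.ai/v2/router",
        api_key=os.getenv("ORQ_API_KEY"),
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
            "structured_output": True,
        },
    )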

Basic Agent Example

Python
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        base_url="https://api.orq.ai/v2/router",
        api_key=os.getenv("ORQ_API_KEY"),
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
            "structured_output": True,
        },
    )

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )

    result = await agent.run(task="What is quantum computing?")
    print(result.messages[-1].content)
    await model_client.close()

asyncio.run(main())
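
If you want output as it is generated rather than after the run completes, the same setup works with run_stream and AutoGen's built-in Console renderer; setting model_client_stream=True streams individual tokens. A sketch:
Python
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        base_url="https://api.orq.ai/v2/router",
        api_key=os.getenv("ORQ_API_KEY"),
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
            "structured_output": True,
        },
    )

    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
        model_client_stream=True,  # stream tokens from the model
    )

    # Console renders messages to stdout as they arrive
    await Console(agent.run_stream(task="What is quantum computing?"))
    await model_client.close()

asyncio.run(main())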

Multi-Agent Team

Orchestrate multiple specialized agents with RoundRobinGroupChat:
Python
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main():
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        base_url="https://api.orq.ai/v2/router",
        api_key=os.getenv("ORQ_API_KEY"),
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
            "structured_output": True,
        },
    )

    researcher = AssistantAgent(
        name="researcher",
        model_client=model_client,
        system_message="You research topics and provide key facts.",
    )

    writer = AssistantAgent(
        name="writer",
        model_client=model_client,
        system_message="You write clear summaries based on research.",
    )

    team = RoundRobinGroupChat(
        participants=[researcher, writer],
        termination_condition=MaxMessageTermination(max_messages=3),  # task + one turn each
    )

    result = await team.run(task="Research and summarize what LLMs are.")
    print(result.messages[-1].content)
    await model_client.close()

asyncio.run(main())
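
MaxMessageTermination is one of several built-in conditions, and conditions can be combined with the | operator so the team stops on whichever fires first. A sketch:
Python
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination

# Stop when an agent says "TERMINATE" or after 10 messages, whichever comes first
termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(max_messages=10)

# researcher and writer as defined in the example above
team = RoundRobinGroupChat(
    participants=[researcher, writer],
    termination_condition=termination,
)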

Model Selection

With Orq.ai, you can use any supported model from 20+ providers:
Python
import os
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_info = {
    "vision": False,
    "function_calling": True,
    "json_output": True,
    "family": "unknown",
    "structured_output": True,
}

# Use Claude
claude_client = OpenAIChatCompletionClient(
    model="claude-sonnet-4-5-20250929",
    base_url="https://api.orq.ai/v2/router",
    api_key=os.getenv("ORQ_API_KEY"),
    model_info=model_info,
)

# Use Gemini
gemini_client = OpenAIChatCompletionClient(
    model="gemini-2.5-flash",
    base_url="https://api.orq.ai/v2/router",
    api_key=os.getenv("ORQ_API_KEY"),
    model_info=model_info,
)

# Use Groq
groq_client = OpenAIChatCompletionClient(
    model="groq/llama-3.3-70b-versatile",
    base_url="https://api.orq.ai/v2/router",
    api_key=os.getenv("ORQ_API_KEY"),
    model_info=model_info,
)
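
Because each agent takes its own model_client, a single team can mix providers, for example pairing Claude for research with Gemini for writing. A sketch reusing the clients defined above:
Python
from autogen_agentchat.agents import AssistantAgent

# Each agent can route to a different provider through the same AI Router
researcher = AssistantAgent(
    name="researcher",
    model_client=claude_client,
    system_message="You research topics and provide key facts.",
)

writer = AssistantAgent(
    name="writer",
    model_client=gemini_client,
    system_message="You write clear summaries based on research.",
)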

Observability

Getting Started

Microsoft AutoGen enables sophisticated multi-agent conversations and collaborative AI systems. Tracing AutoGen with Orq.ai provides deep insight into agent interactions, conversation flows, tool usage, and multi-agent coordination patterns, helping you optimize your conversational AI applications.

Prerequisites

Before you begin, ensure you have:
  • An Orq.ai account and API Key
  • Python 3.8+
  • Microsoft AutoGen installed in your project
  • OpenAI API key (or other LLM provider credentials)

Install Dependencies

# Core AutoGen, OpenTelemetry, and OpenAI instrumentation packages
pip install pyautogen opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-openai

# LLM providers
pip install openai

Configure Orq.ai

Set up your environment variables to connect to Orq.ai’s OpenTelemetry collector:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=autogen-app,service.version=1.0.0"
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=autogen-app,service.version=1.0.0"
$env:OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
Using .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=autogen-app,service.version=1.0.0
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
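
If you use a .env file, load it before the OTLP exporter is created, since the exporter reads its endpoint and headers from the environment at construction time. A minimal sketch using the python-dotenv package (an assumption; any loader works):
# pip install python-dotenv
from dotenv import load_dotenv

# Must run before OTLPSpanExporter() is created, which reads
# OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS from the environment
load_dotenv()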

Integration

AutoGen works with standard OpenTelemetry instrumentation. The example below configures a tracer provider and instruments the OpenAI client with OpenAIInstrumentor, so every LLM call your agents make is exported to Orq.ai.
Set up OpenTelemetry tracing in your application:
import os
import autogen
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4o", "api_key": os.getenv("OPENAI_API_KEY")}]

# Create agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "temperature": 0}
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config={"work_dir": "coding", "use_docker": False}
)

print("Starting AutoGen conversation (this will be traced)...")

# Start conversation (automatically traced)
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers up to n=10"
)
All AutoGen agent conversations and interactions will be instrumented and exported to Orq.ai through the OTLP exporter. For more details, see Traces.
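
You can also group an entire conversation under a single parent span, which makes multi-turn runs easier to navigate in the trace view. A sketch reusing the tracer provider configured above:
tracer = trace.get_tracer("autogen-app")

# Every span emitted during the chat becomes a child of this span
with tracer.start_as_current_span("fibonacci-conversation"):
    user_proxy.initiate_chat(
        assistant,
        message="Write a Python function to calculate fibonacci numbers up to n=10"
    )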

Advanced Examples

Multi-Agent Group Chat
import autogen
import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4", "api_key": os.getenv("OPENAI_API_KEY")}]

# Create specialized agents
coder = autogen.AssistantAgent(
    name="coder",
    system_message="You are an expert Python developer.",
    llm_config={"config_list": config_list}
)

reviewer = autogen.AssistantAgent(
    name="code_reviewer",
    system_message="You review code for quality and best practices.",
    llm_config={"config_list": config_list}
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "workspace"}
)

# Create group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer],
    messages=[],
    max_round=10
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list}
)

# Start group conversation (automatically traced)
user_proxy.initiate_chat(
    manager,
    message="Create a REST API for user management with FastAPI"
)
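
By default, GroupChat lets the manager's LLM pick the next speaker. pyautogen also supports deterministic strategies through the speaker_selection_method parameter if you want a reproducible turn order. A sketch:
# Alternative: deterministic turn order instead of LLM-selected speakers
groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer],
    messages=[],
    max_round=10,
    speaker_selection_method="round_robin",  # also: "auto" (default), "random", "manual"
)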

Agent with Custom Tools
import autogen
import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4", "api_key": os.getenv("OPENAI_API_KEY")}]

# Define custom tools/functions
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 75°F"

def calculate_distance(city1: str, city2: str) -> str:
    """Calculate distance between two cities."""
    return f"Distance between {city1} and {city2}: 500 km"

# Create agent with function calling
travel_planner = autogen.AssistantAgent(
    name="travel_planner",
    system_message="You help plan travel itineraries.",
    llm_config={
        "config_list": config_list,
        "functions": [
            {
                "name": "get_weather",
                "description": "Get current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    },
                    "required": ["location"]
                }
            },
            {
                "name": "calculate_distance",
                "description": "Calculate distance between cities",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city1": {"type": "string"},
                        "city2": {"type": "string"}
                    },
                    "required": ["city1", "city2"]
                }
            }
        ]
    }
)

user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    function_map={
        "get_weather": get_weather,
        "calculate_distance": calculate_distance
    },
    code_execution_config=False
)

# Use agent with tools (automatically traced)
user_proxy.initiate_chat(
    travel_planner,
    message="Plan a 3-day trip from New York to London"
)

AutoGen can also be used through our AI Router. To learn more, see AutoGen Gateway.