AutoGen

Integrate Orq.ai with Microsoft AutoGen using OpenTelemetry

Getting Started

Microsoft AutoGen enables sophisticated multi-agent conversations and collaborative AI systems. Tracing AutoGen with Orq.ai provides deep insights into agent interactions, conversation flows, tool usage, and multi-agent coordination patterns to optimize your conversational AI applications.

Prerequisites

Before you begin, ensure you have:

  • An Orq.ai account and API Key
  • Python 3.8+
  • Microsoft AutoGen installed in your project
  • OpenAI API key (or other LLM provider credentials)

Install Dependencies

# Core AutoGen and OpenTelemetry packages
pip install pyautogen opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-openai

# LLM providers
pip install openai

Configure Orq.ai

Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:

Unix/Linux/macOS:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=autogen-app,service.version=1.0.0"
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"

Windows (PowerShell):

$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=autogen-app,service.version=1.0.0"
$env:OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"

Using .env file:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=autogen-app,service.version=1.0.0
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
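If you use a .env file, make sure it is loaded before the tracing code runs; python-dotenv (`pip install python-dotenv`) is the usual choice. As a rough stdlib-only sketch of what such loading does (the `load_env` helper below is hypothetical, not part of any library):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Variables already set in the environment take precedence
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()  # run before the OTLP exporter is created
```

Note that the OTLP exporter reads `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` at construction time, so the environment must be populated first.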

Integration

AutoGen works with standard OpenTelemetry instrumentation. Configure a tracer provider with an OTLP exporter and instrument the underlying OpenAI client so every LLM call AutoGen makes is exported to Orq.ai.

Set up OpenTelemetry tracing in your application:

import os
import autogen
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource.create({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4o", "api_key": os.getenv("OPENAI_API_KEY")}]

# Create agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "temperature": 0}
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config={"work_dir": "coding", "use_docker": False}
)

print("Starting AutoGen conversation (this will be traced)...")

# Start conversation (automatically traced)
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers up to n=10"
)

All AutoGen agent conversations and interactions will be instrumented and exported to Orq.ai through the OTLP exporter. For more details, see Traces.

Advanced Examples

Multi-Agent Group Chat

import autogen
import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource.create({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4o", "api_key": os.getenv("OPENAI_API_KEY")}]

# Create specialized agents
coder = autogen.AssistantAgent(
    name="coder",
    system_message="You are an expert Python developer.",
    llm_config={"config_list": config_list}
)

reviewer = autogen.AssistantAgent(
    name="code_reviewer",
    system_message="You review code for quality and best practices.",
    llm_config={"config_list": config_list}
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "workspace"}
)

# Create group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer],
    messages=[],
    max_round=10
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list}
)

# Start group conversation (automatically traced)
user_proxy.initiate_chat(
    manager,
    message="Create a REST API for user management with FastAPI"
)
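Conceptually, the GroupChatManager selects a speaker each round and appends their reply to the shared transcript until max_round is reached or a termination condition fires. A plain-Python sketch of that round-robin flow (illustration only, not AutoGen internals):

```python
def run_group_chat(agents, task, max_round=10, is_done=lambda msg: "TERMINATE" in msg):
    """Toy round-robin group chat: each round, the next agent replies to the transcript."""
    messages = [("user", task)]
    for round_no in range(max_round):
        name, reply_fn = agents[round_no % len(agents)]  # round-robin speaker selection
        reply = reply_fn(messages)
        messages.append((name, reply))
        if is_done(reply):  # stop early on a termination signal
            break
    return messages

# Two stub "agents"; the second one terminates the chat
agents = [
    ("coder", lambda msgs: f"draft after {len(msgs)} message(s)"),
    ("reviewer", lambda msgs: "looks good. TERMINATE"),
]
transcript = run_group_chat(agents, "Build a REST API")
print(len(transcript))  # initial task + coder reply + reviewer reply = 3
```

In real AutoGen the manager can also use the LLM to pick the next speaker, which is why the GroupChatManager itself takes an llm_config.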
Agent with Custom Tools

import autogen
import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure tracer provider
tracer_provider = TracerProvider(
    resource=Resource.create({"service.name": "autogen-app"})
)

# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter()

tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls for automatic tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

config_list = [{"model": "gpt-4o", "api_key": os.getenv("OPENAI_API_KEY")}]

# Define custom tools/functions
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 75°F"

def calculate_distance(city1: str, city2: str) -> str:
    """Calculate distance between two cities."""
    return f"Distance between {city1} and {city2}: 500 km"

# Create agent with function calling
travel_planner = autogen.AssistantAgent(
    name="travel_planner",
    system_message="You help plan travel itineraries.",
    llm_config={
        "config_list": config_list,
        "functions": [
            {
                "name": "get_weather",
                "description": "Get current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    },
                    "required": ["location"]
                }
            },
            {
                "name": "calculate_distance",
                "description": "Calculate distance between cities",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city1": {"type": "string"},
                        "city2": {"type": "string"}
                    },
                    "required": ["city1", "city2"]
                }
            }
        ]
    }
)

user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    function_map={
        "get_weather": get_weather,
        "calculate_distance": calculate_distance
    },
    code_execution_config=False
)

# Use agent with tools (automatically traced)
user_proxy.initiate_chat(
    travel_planner,
    message="Plan a 3-day trip from New York to London"
)
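Writing the JSON function schemas above by hand is error-prone. As a rough sketch, a schema can be derived from a function's type hints (the `function_schema` helper below is hypothetical, not part of AutoGen; recent pyautogen versions can infer schemas themselves when you register typed, docstring-annotated functions):

```python
import inspect

# Map Python annotations to JSON Schema types (minimal set for illustration)
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_schema(fn):
    """Build an OpenAI-style function schema from a typed Python function."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:  # no default => required
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 75°F"

schema = function_schema(get_weather)
print(schema["parameters"]["required"])  # ['location']
```

The resulting dict can then be placed in the llm_config "functions" list, keeping the schema and the Python implementation in sync.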
📘

AutoGen is also available through our AI Gateway. To learn more, see AutoGen Gateway.