Connect CrewAI to Orq.ai’s AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
AI Router
Route your LLM calls through the AI Router with a single base URL change. Zero vendor lock-in: always run on the best model at the lowest cost for your use case.
Observability
Instrument your code with OpenTelemetry to capture traces, logs, and metrics for every LLM call, agent step, and tool use.
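To make the "single base URL change" concrete, here is a minimal sketch of what an OpenAI-compatible request to the router amounts to. The `/chat/completions` path and the `build_chat_request` helper are assumptions extrapolated from the OpenAI-compatible convention, not a confirmed Orq.ai API reference:

```python
import json
import os

ROUTER_BASE_URL = "https://api.orq.ai/v2/router"

def build_chat_request(model, messages):
    """Assemble an OpenAI-style chat-completions request for the AI Router.

    Hypothetical helper: the /chat/completions path follows the
    OpenAI-compatible convention implied by the base-URL swap.
    """
    url = f"{ROUTER_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {os.getenv('ORQ_API_KEY', '<ORQ_API_KEY>')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body

url, headers, body = build_chat_request(
    "openai/gpt-4o", [{"role": "user", "content": "Hello"}]
)
```

Any client that already speaks the OpenAI API shape only needs its base URL and API key swapped; the payload stays the same.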
CrewAI is a framework for orchestrating multi-agent teams with role-based agents, hierarchical task management, and collaborative AI workflows. By connecting CrewAI to Orq.ai’s AI Router, you get access to 300+ models for your agent crews with a single configuration change.
Orchestrate multiple agents with specialized roles:
```python
from crewai import Agent, Task, Crew, LLM
import os

# Route all LLM calls through the Orq.ai AI Router
llm = LLM(
    model="openai/gpt-4o",
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/router",
)

researcher = Agent(
    role="Research Analyst",
    goal="Research topics and gather key facts",
    backstory="Expert at finding and summarizing information.",
    llm=llm,
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, engaging content",
    backstory="Skilled at turning research into readable content.",
    llm=llm,
)

research_task = Task(
    description="Research the key benefits of renewable energy in 3 bullet points.",
    agent=researcher,
    expected_output="3 bullet points about renewable energy benefits.",
)

write_task = Task(
    description="Write a one-paragraph summary based on the research.",
    agent=writer,
    expected_output="A single paragraph summarizing renewable energy benefits.",
    context=[research_task],
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    tracing=False,
)
result = crew.kickoff()
print(result)
```
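Because every model sits behind the same base URL, different agents in one crew can run on different models by varying only the model string. A minimal sketch of that pattern; the Anthropic model identifier below is an illustrative assumption, not a confirmed catalog entry:

```python
import os

ROUTER_URL = "https://api.orq.ai/v2/router"

def llm_config(model):
    """Keyword arguments for crewai.LLM; only the model string varies."""
    return {
        "model": model,
        "api_key": os.getenv("ORQ_API_KEY", "<ORQ_API_KEY>"),
        "base_url": ROUTER_URL,
    }

# Give each agent its own model through the same router.
# "anthropic/claude-sonnet-4" is illustrative, not a verified model ID.
researcher_cfg = llm_config("openai/gpt-4o")
writer_cfg = llm_config("anthropic/claude-sonnet-4")
```

In the crew above, this would become `LLM(**llm_config(...))` per agent, so swapping a model is a one-string change with no other code edits.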
CrewAI enables powerful multi-agent coordination for complex AI workflows. Tracing CrewAI with Orq.ai provides comprehensive insights into agent interactions, task execution, tool usage, and crew performance to optimize your multi-agent systems.
We’ll use OpenInference instrumentation together with an OpenTelemetry TracerProvider to trace CrewAI:
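The example below assumes the OpenInference and OpenTelemetry Python distributions are installed; the package names here are taken from their public PyPI distributions:

```shell
pip install crewai \
    openinference-instrumentation-crewai \
    openinference-instrumentation-openai \
    opentelemetry-sdk \
    opentelemetry-exporter-otlp-proto-http
```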
```python
from openinference.instrumentation.crewai import CrewAIInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from crewai import Agent, Task, Crew

# Initialize OpenTelemetry with an OTLP exporter pointed at Orq.ai
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://api.orq.ai/v2/otel/v1/traces",
            headers={"Authorization": "Bearer <ORQ_API_KEY>"},
        )
    )
)
trace.set_tracer_provider(tracer_provider)

# Instrument CrewAI and the underlying OpenAI client
CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Your CrewAI code is automatically traced
researcher = Agent(
    role="Market Research Analyst",
    goal="Gather comprehensive market data and trends",
    backstory="Expert in analyzing market dynamics and consumer behavior",
)

task = Task(
    description="Research the latest trends in AI and machine learning",
    agent=researcher,
    expected_output="Comprehensive report on AI and ML trends with key insights and recommendations",
)

crew = Crew(agents=[researcher], tasks=[task], tracing=False)
result = crew.kickoff()
```
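As an optional refinement, the standard OpenTelemetry SDK environment variables can replace the hard-coded endpoint and headers, keeping the API key out of source code. This is generic OTel SDK behavior rather than anything Orq.ai-specific; a sketch:

```python
import os

# Standard OpenTelemetry SDK environment variables for the traces exporter.
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = "https://api.orq.ai/v2/otel/v1/traces"
# Headers are comma-separated key=value pairs; per the OTLP spec,
# spaces in values should be URL-encoded (%20).
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = "Authorization=Bearer%20<ORQ_API_KEY>"

# With these set, OTLPSpanExporter() can be constructed with no arguments;
# the SDK reads the endpoint and headers from the environment.
```

In practice you would set these in your shell or deployment config rather than in Python, so the same code runs unchanged across environments.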