AWS Bedrock AgentCore Runtime is a serverless hosting environment for deploying agents built with any framework (Strands, LangGraph, CrewAI) at production scale, without managing infrastructure. Connect to the AI Router from inside AgentCore to access 300+ models across 20+ providers with automatic fallbacks, cost tracking, and full observability.
- AI Router: Route LLM calls through the AI Router inside AgentCore with a single base URL change.
- Orq Agents: Invoke any Orq.ai Agent by key from inside AgentCore using the OpenAI Responses API.
AI Router
Connect the AI Router to an AgentCore entrypoint to access 300+ LLMs across 20+ providers, with automatic fallbacks, cost tracking, and observability.
OpenAI Agents SDK
Install:

```shell
pip install openai-agents openai bedrock-agentcore
```
Configure an AsyncOpenAI client with the AI Router base URL, then wrap the agent in a BedrockAgentCoreApp entrypoint:
```python
import os

from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client, set_tracing_disabled
from bedrock_agentcore.runtime import BedrockAgentCoreApp

client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router"
)

set_tracing_disabled(True)
set_default_openai_client(client)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="openai/gpt-4o"
)

app = BedrockAgentCoreApp()

@app.entrypoint
async def agent_invocation(payload, context):
    query = payload.get("prompt", "How can I help you?")
    result = await Runner.run(agent, query)
    return {"result": result.final_output}

app.run()
```
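For a quick local check before deploying, `BedrockAgentCoreApp` serves an HTTP server when the script is run directly. On my reading of the AgentCore Runtime service contract it listens on port 8080 and accepts invocations at `/invocations`; treat the port and path as assumptions to verify against the AWS docs:

```shell
# Start the app in one terminal:
#   python agent.py
# Then invoke the entrypoint with a JSON payload matching what
# payload.get("prompt", ...) expects:
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the AI Router?"}'
```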
Strands Agents
Install:
```shell
pip install strands-agents bedrock-agentcore
```
Use OpenAIModel with the AI Router base URL inside a Strands agent:
```python
import os

from strands import Agent
from strands.models.openai import OpenAIModel
from bedrock_agentcore.runtime import BedrockAgentCoreApp

model = OpenAIModel(
    model_id="openai/gpt-4o",
    client_args={
        "api_key": os.getenv("ORQ_API_KEY"),
        "base_url": "https://api.orq.ai/v3/router"
    }
)

agent = Agent(
    model=model,
    system_prompt="You are a helpful assistant."
)

app = BedrockAgentCoreApp()

@app.entrypoint
def agent_invocation(payload, context):
    result = agent(payload.get("prompt", "How can I help?"))
    return {"result": str(result)}

app.run()
```
Orq Agents
Invoke any Orq.ai Agent by key using model="agent/YOUR_AGENT_KEY" with the OpenAI Responses API. The AI Router executes the configured agent including its system prompt, tools, evaluators, and model settings.
Find the agent key on the Agents page in Orq.ai.
```python
import os

from openai import AsyncOpenAI
from bedrock_agentcore.runtime import BedrockAgentCoreApp

client = AsyncOpenAI(
    api_key=os.getenv("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v3/router"
)

app = BedrockAgentCoreApp()

@app.entrypoint
async def agent_invocation(payload, context):
    query = payload.get("prompt", "How can I help you?")
    response = await client.responses.create(
        model="agent/YOUR_AGENT_KEY",
        input=query
    )
    return {"result": response.output_text}

app.run()
```
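The `agent/YOUR_AGENT_KEY` string follows the same `<prefix>/<name>` shape as the router's model identifiers (e.g. `openai/gpt-4o`), with `agent/` selecting an Orq Agent instead of a provider model. A hypothetical helper, purely to illustrate the convention (not part of any SDK):

```python
def split_model_string(model: str) -> tuple[str, str]:
    # "openai/gpt-4o"        -> routed to a provider model
    # "agent/YOUR_AGENT_KEY" -> routed to a configured Orq Agent
    prefix, _, name = model.partition("/")
    return prefix, name

assert split_model_string("openai/gpt-4o") == ("openai", "gpt-4o")
assert split_model_string("agent/YOUR_AGENT_KEY") == ("agent", "YOUR_AGENT_KEY")
```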
Observability
Capture traces from AgentCore runs and send them to Orq.ai using OpenTelemetry.
Installation
```shell
pip install openai bedrock-agentcore opentelemetry-sdk opentelemetry-exporter-otlp-proto-http openinference-instrumentation-openai
```
Configuration
Set up the OTLP exporter and instrument the OpenAI client before starting the app:
```python
import os

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import AsyncOpenAI
from bedrock_agentcore.runtime import BedrockAgentCoreApp

exporter = OTLPSpanExporter(
    endpoint="https://api.orq.ai/v2/otel/v1/traces",
    headers={"Authorization": f"Bearer {os.getenv('ORQ_API_KEY')}"}
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
OpenAIInstrumentor().instrument(tracer_provider=provider)

client = AsyncOpenAI(
    api_key=os.getenv("OPENAI_API_KEY")
)

app = BedrockAgentCoreApp()

@app.entrypoint
async def agent_invocation(payload, context):
    query = payload.get("prompt", "How can I help you?")
    response = await client.responses.create(
        model="gpt-4o",
        input=query
    )
    return {"result": response.output_text}

app.run()
```
View traces in AI Studio under the Traces tab.
Evaluations & Experiments
Once agents are running, use Evaluatorq to score outputs across a dataset and Experiments to compare configurations side-by-side.
- Run Evaluations with Evaluatorq: Run parallel evaluations across agents and compare results.
- Run Experiments via the API: Compare agent configurations and view results in AI Studio.