
Overview

The Agents Beta API provides powerful endpoints for creating, executing, and managing AI agents with support for tools, memory, knowledge bases, and real-time streaming.

Prerequisites

  • Set up a new Project. If you want to follow along, name it agents; this name is also used as the path under which your resources are created.
Setting up a project in Orq.ai
  • Ensure you have an API Key ready to use.
  • Optionally install one of our SDKs.

Executing an Agent

Running an agent involves a two-step process:
  1. Create an Agent - Define the agent’s configuration (role, instructions, model, tools, etc.)
  2. Execute the Agent - Send a message to the agent using the Responses API
The Agents payloads are built on the A2A protocol, which standardizes agent-to-agent communication. To learn more, see the A2A Protocol website.

Step 1: Create an Agent

curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "simple-agent-1",
  "role": "Assistant",
  "description": "A helpful assistant for general tasks",
  "instructions": "Be helpful and concise",
  "path": "Default/agents",
  "model": {
    "id": "openai/gpt-4o"
  },
  "settings": {
    "max_iterations": 3,
    "max_execution_time": 300,
    "tools": []
  }
}'

Step 2: Execute the Agent

curl -X POST https://api.orq.ai/v2/agents/simple-agent-1/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "simple-agent-1",
  "background": false,
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "I'\''m preparing a technical presentation on microservices architecture. Can you help me create an outline covering the key benefits, challenges, and best practices?"
      }
    ]
  }
}'
The response is sent back as a task object containing the status and metadata, including a unique id.
Set background: false (the default) to wait for the agent execution to complete, or background: true to return immediately without waiting. Use the task id in subsequent calls to continue the conversation with the agent.
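The request body above can also be assembled programmatically. A minimal sketch in Python; the helper name is illustrative, and the field names follow the examples in this guide:

```python
import json

def build_execute_payload(agent_key, text, background=False, task_id=None):
    """Assemble the request body for POST /v2/agents/{key}/responses."""
    payload = {
        "agent_key": agent_key,
        "background": background,
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": text}],
        },
    }
    if task_id is not None:
        # Continue an existing task instead of starting a new one
        payload["task_id"] = task_id
    return payload

body = json.dumps(build_execute_payload("simple-agent-1", "Hello!"))
```

The resulting string can be sent as the request body with any HTTP client, just like the -d value in the curl examples.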

Model Parameter Format

The model parameter supports two formats.
Object format (recommended) - allows you to specify model parameters such as temperature:
"model": {
  "id": "openai/gpt-4o",
  "parameters": {
    "temperature": 0.5
  }
}
String format - For simple use cases without custom parameters:
"model": "openai/gpt-4o"
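Client code that accepts both forms can normalize them before sending. A small sketch; the helper name is illustrative:

```python
def normalize_model(model):
    """Return the object form of the `model` parameter, whether it was
    given as a plain string or as an object with optional parameters."""
    if isinstance(model, str):
        return {"id": model}
    return model
```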
For a complete list of supported model parameters, see the API Reference.
The Responses API returns a task object like the following:
{
  "id": "01K6D8QESESZ6SAXQPJPFQXPFT",
  "contextId": "0ae12412-cdea-408e-a165-9ff8086db400",
  "kind": "task",
  "status": {
    "state": "submitted",
    "timestamp": "2025-09-30T12:14:32.805Z"
  },
  "metadata": {
    "orq_workspace_id": "0ae12412-cdea-408e-a165-9ff8086db400",
    "orq_agent_id": "01K6D8QET4CR7DSDV07Y5WMDDG",
    "orq_agent_key": "simple-agent-1",
    "orq_created_by_id": null
  }
}

Continuing a Task

After receiving a task response, you can continue the conversation by sending additional messages to the same agent. This allows for multi-turn interactions where the agent maintains context from previous exchanges. To continue a task, use the /v2/agents/{key}/responses endpoint and provide the task_id from the previous response. The task must be in an inactive state to continue.
task_id: Optional task ID to continue an existing agent execution. When provided, the agent will continue the conversation from the existing task state.
curl -X POST https://api.orq.ai/v2/agents/simple-agent-1/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "simple-agent-1",
  "task_id": "01K6D8QESESZ6SAXQPJPFQXPFT",
  "background": false,
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Great outline! Can you expand on the challenges section? I want to dive deeper into data consistency approaches and monitoring best practices."
      }
    ]
  }
}'
The continuation will return a new task ID representing the extended conversation:
{
  "id": "01K6D9XQZV34PDPMYX6SHEMY5A",
  "contextId": "0ae12412-cdea-408e-a165-9ff8086db400",
  "kind": "task",
  "status": {
    "state": "submitted",
    "timestamp": "2025-09-30T12:18:45.123Z"
  },
  "metadata": {
    "orq_workspace_id": "0ae12412-cdea-408e-a165-9ff8086db400",
    "orq_agent_id": "01K6D8QET4CR7DSDV07Y5WMDDG",
    "orq_agent_key": "simple-agent-1",
    "orq_created_by_id": null
  }
}
Note: The agent maintains full context from the previous conversation.

Agent and Task States

Agents run through the following states when processing tasks:
  • Active: Execution in progress; continuation requests are blocked
  • Inactive: Waiting for user input or tool results; ready for continuation
  • Error: Execution failed; continuation is blocked
  • Approval Required: Tool execution requires manual approval (coming soon)
Tasks go through the following states:
  • Submitted: Task created and queued for execution
  • Working: Agent actively processing
  • Input Required: Waiting for user input or tool results
  • Completed: Task finished successfully
  • Failed: Task encountered an error
  • Canceled: Task was manually canceled
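These states can be checked programmatically, for example when deciding whether to poll again or send a continuation request. A sketch, assuming the lowercase state strings shown in the task response examples in this guide:

```python
# Terminal task states: the task will not transition again
TERMINAL_TASK_STATES = {"completed", "failed", "canceled"}

def is_terminal(task_state: str) -> bool:
    """True when the task has finished and will not change state again."""
    return task_state.lower() in TERMINAL_TASK_STATES

def can_continue(agent_state: str) -> bool:
    """Continuation requests are only accepted while the agent is inactive."""
    return agent_state.lower() == "inactive"
```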

Attaching Files

Multiple types of files can be attached when calling agents with the Responses API.
  • Images: Via URL (uri) or base64 encoding (bytes)
  • PDFs: Only supported via base64 encoding (bytes) - URI links are not supported for PDFs
  • MIME Types: Required - Must specify correct mimeType (e.g., image/jpeg, application/pdf)
Always verify that your chosen model supports the file types you're using. Vision-capable models are typically required for both image and PDF processing.
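A sketch of building a file message part that enforces these rules, using Python's standard mimetypes module (the helper name is illustrative):

```python
import base64
import mimetypes

def build_file_part(path=None, uri=None, name=None):
    """Build a `file` message part: images may use a URI, but PDFs
    must be inlined as base64 `bytes`."""
    if uri is not None:
        mime = mimetypes.guess_type(uri)[0]
        if mime == "application/pdf":
            raise ValueError("PDFs must be sent as base64 bytes, not a URI")
        return {"kind": "file",
                "file": {"uri": uri, "mimeType": mime,
                         "name": name or uri.rsplit("/", 1)[-1]}}
    mime = mimetypes.guess_type(path)[0]
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return {"kind": "file",
            "file": {"bytes": data, "mimeType": mime,
                     "name": name or path.rsplit("/", 1)[-1]}}
```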

Converting Files to Base64

Before attaching PDF files, you need to convert them to base64. Here’s how to do it:
import base64

def file_to_base64(file_path: str) -> str:
    """Convert a file to base64 string"""
    with open(file_path, "rb") as file:
        return base64.b64encode(file.read()).decode("utf-8")

# Usage
pdf_base64 = file_to_base64("path/to/your/document.pdf")
print(f"Base64 encoded PDF: {pdf_base64}")

Examples

Attaching an Image

Step 1: Create an Agent
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "image-classifier",
  "role": "Image Analyst",
  "description": "Analyzes images and identifies visual content",
  "instructions": "Analyze images and describe what you can see in detail",
  "path": "Default/agents",
  "model": {
    "id": "openai/gpt-4o"
  },
  "settings": {
    "max_iterations": 5,
    "max_execution_time": 600,
    "tools": [
      {
        "type": "current_date"
      }
    ]
  }
}'
Step 2: Call the Agent with an Image
curl -X POST https://api.orq.ai/v2/agents/image-classifier/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "image-classifier",
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Look at this map image and tell me what cities, regions, and geographical features you can identify."
      },
      {
        "kind": "file",
        "file": {
          "uri": "https://upload.wikimedia.org/wikipedia/commons/7/73/Herman_Moll_A_New_Map_of_Europe_According_to_the_Newest_Observations_1721.JPG",
          "mimeType": "image/jpeg",
          "name": "europe-map.jpg"
        }
      }
    ]
  }
}'

Attaching a PDF

Step 1: Create an Agent
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "pdf-analyzer",
  "role": "Document Analyst",
  "description": "Analyzes PDF documents and extracts information",
  "instructions": "Analyze the provided PDF document and answer questions about its content",
  "path": "Default/agents",
  "model": {
    "id": "openai/gpt-4o"
  },
  "settings": {
    "max_iterations": 5,
    "max_execution_time": 600,
    "tools": [
      {
        "type": "current_date"
      }
    ]
  }
}'
Step 2: Call the Agent with a PDF
curl -X POST https://api.orq.ai/v2/agents/pdf-analyzer/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "pdf-analyzer",
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Please analyze this PDF document and summarize its key points"
      },
      {
        "kind": "file",
        "file": {
          "bytes": "JVBERi0xLjQKJeLjz9MKMyAwIG9iago8PC9MZW5ndGggMTE0L0ZpbHRlci9GbGF0ZURlY29kZT4+CnN0cmVhbQp4nD2OywoCMQxF3/mKu3YRk7ZJk3YpqCOIAzKCH2DSVgp2BtvR/3emDt7lOZyTl0CgVo0KBFhKZRWYClbwBHtwBg7gCM7AX8E3eMGHkMbmdjVdzLfx/XG13Kzn80U8G4+Sny9JyCmJMI25EFFGUs8P/SBJY7+XZElIo36c+72kl6ZJPOglWRgNe8Mw6oZJHqU0HSQvv9vn+wdDDxLsCmVuZHN0cmVhbQplbmRvYmoKMSAwIG9iago8PC9UeXBlL1BhZ2UvTWVkaWFCb3hbMCAwIDYxMiA3OTJdL1Jlc291cmNlczw8L0ZvbnQ8PC9GMSAyIDAgUj4+Pj4vQ29udGVudHMgMyAwIFIvUGFyZW50IDQgMCBSPj4KZW5kb2JqCjIgMCBvYmoKPDwvVHlwZS9Gb250L1N1YnR5cGUvVHlwZTEvQmFzZUZvbnQvSGVsdmV0aWNhPj4KZW5kb2JqCjQgMCBvYmoKPDwvVHlwZS9QYWdlcy9Db3VudCAxL0tpZHNbMSAwIFJdPj4KZW5kb2JqCjUgMCBvYmoKPDwvVHlwZS9DYXRhbG9nL1BhZ2VzIDQgMCBSPj4KZW5kb2JqCjYgMCBvYmoKPDwvUHJvZHVjZXIoU2FtcGxlIFBERikvQ3JlYXRpb25EYXRlKEQ6MjAyNDAxMDEwMDAwMDApPj4KZW5kb2JqCnhyZWYKMCA3CjAwMDAwMDAwMDAgNjU1MzUgZiAKMDAwMDAwMDE5NyAwMDAwMCBuIAowMDAwMDAwMzA0IDAwMDAwIG4gCjAwMDAwMDAwMTUgMDAwMDAgbiAKMDAwMDAwMDM4NSAwMDAwMCBuIAowMDAwMDAwNDQwIDAwMDAwIG4gCjAwMDAwMDA0ODcgMDAwMDAgbiAKdHJhaWxlcgo8PC9TaXplIDcvUm9vdCA1IDAgUi9JbmZvIDYgMCBSPj4Kc3RhcnR4cmVmCjU3MQolJUVPRko=",
          "mimeType": "application/pdf",
          "name": "sample-document.pdf"
        }
      }
    ]
  }
}'
To learn more about the Create Response API and how to use it, see the API reference.

Knowledge Bases

Knowledge Base integration with Agents enables AI systems to access and query your custom data sources during agent execution. This powerful combination allows agents to ground their responses in your specific domain knowledge, documents, and datasets.

Key Features

  • Dynamic Knowledge Discovery: Agents can discover available knowledge bases using retrieve_knowledge_bases
  • Contextual Querying: Query specific knowledge bases with query_knowledge_base based on user queries
  • Automatic Grounding: Responses are grounded in your custom data sources
  • Multiple Knowledge Sources: Connect agents to multiple knowledge bases simultaneously

Creating an Agent with Knowledge Bases

Step 1: Create an Agent

curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "book-assistant",
  "role": "Book Assistant",
  "description": "A friendly agent who is connected to knowledge bases and helps people",
  "instructions": "Help the user with their questions please. First use retrieve_knowledge_bases to see what knowledge sources are available, then query_knowledge_base to find relevant information.",
  "path": "Default/agents",
  "model": {
    "id": "anthropic/claude-opus-4-1-20250805"
  },
  "settings": {
    "max_iterations": 5,
    "max_execution_time": 600,
    "tools": [
      {
        "type": "current_date"
      },
      {
        "type": "retrieve_knowledge_bases"
      },
      {
        "type": "query_knowledge_base"
      }
    ]
  },
  "knowledge_bases": [
    {
      "knowledge_id": "my_knowledge_base"
    }
  ]
}'

Step 2: Execute the Agent

curl -X POST https://api.orq.ai/v2/agents/book-assistant/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "book-assistant",
  "background": false,
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "What is the coat of arms of the Apafi family?"
      }
    ]
  }
}'
Agents should use retrieve_knowledge_bases to discover available knowledge bases before querying them. Guide your agent with instructions like: First use retrieve_knowledge_bases to see what knowledge sources are available, then query_knowledge_base to find relevant information.
To learn more about Knowledge Bases, see Knowledge Bases.

Memory Management

Memory Management enables agents to maintain persistent context and learn from previous interactions across conversations. By integrating with Memory Stores, agents can store, retrieve, and manage information that persists beyond individual sessions.

Key Features

  • Persistent Memory: Store information that persists across conversations and sessions
  • Entity-Based Isolation: Use entity IDs to separate memories by user, organization, or session
  • Dynamic Memory Discovery: Discover available memory stores using retrieve_memory_stores
  • Memory Operations: Query, write, and delete memory documents programmatically
  • Contextual Recall: Automatically retrieve relevant memories based on conversation context
Memory stores don’t automatically save all information from conversations.
  • What gets saved depends on the memory store’s description field and how you prompt the agent.
  • For best results, explicitly tell the agent what to save (e.g., “Remember that my preferred language is Python”).
  • Without explicit save instructions, the system may only retain some information while missing other details.
  • Make sure your memory store description clearly states what type of data it should store.

Examples

Creating Memory Store

Create a new memory store with a unique key and embedding model configuration.
curl --request POST \
     --url https://api.orq.ai/v2/memory-stores \
     --header 'accept: application/json' \
     --header 'authorization: Bearer <ORQ_API_KEY>' \
     --header 'content-type: application/json' \
     --data '
{
  "key": "customer_information",
  "description": "Store for customer interaction history and preferences",
  "path": "Default/agents",
  "embedding_config": {
    "model": "openai/text-embedding-3-small"
  }
}'

Creating Memory

Create a new memory for a specific entity within a memory store.
curl --request POST \
     --url https://api.orq.ai/v2/memory-stores/customer_information/memories \
     --header 'accept: application/json' \
     --header 'authorization: Bearer <ORQ_API_KEY>' \
     --header 'content-type: application/json' \
     --data '
{
  "entity_id": "customer_456",
  "metadata": {
    "type": "customer",
    "segment": "premium",
    "region": "north_america",
    "status": "active"
  }
}'
The entity ID returned after creation is used when calling the agent.

Creating an Agent

curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "agent-memories",
  "role": "Personalized Assistant",
  "description": "Agent using external ID for memory store access",
  "instructions": "You have access to user-specific memories. Remember information about the user and recall it when asked.",
  "system_prompt": "Please answer with a lot of emojis for all questions. Use retrieve_memory_stores to see what memory stores are available, then use query_memory_store to search for relevant information before responding",
  "settings": {
    "max_iterations": 5,
    "max_execution_time": 300,
    "tools": [
      {
        "type": "current_date"
      },
      {
        "type": "retrieve_memory_stores"
      },
      {
        "type": "query_memory_store"
      },
      {
        "type": "write_memory_store"
      },
      {
        "type": "delete_memory_document"
      }
    ]
  },
  "model": "openai/gpt-4o",
  "path": "Default/agents",
  "memory_stores": [
    "customer_information"
  ]
}'
Agents must use the retrieve_memory_stores tool first to discover available memory stores before they can query or write to them. Include instructions in your system prompt like: Use retrieve_memory_stores to see what memory stores are available, then use query_memory_store to search for relevant information before responding.

Calling the Agent

To call the agent, we'll use the Responses API with an embedded message and linked memory.
curl -X POST https://api.orq.ai/v2/agents/agent-memories/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "agent_key": "agent-memories",
  "memory": {
    "entity_id": "customer_456"
  },
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Do you remember what is my name?"
      }
    ]
  }
}'
You can use multiple memory stores per call; ensure that the entity_id sent during the calls maps consistently to all memory stores declared during agent creation.

Viewing Traces

Within Traces, you can visualize the memory store usage and retrieval from the Agent, confirming that it was correctly created and used.
Memory Store Trace

Using Tools

Tools extend agent capabilities by providing access to external systems, APIs, and custom functionality. Agents can leverage multiple tool types to handle complex tasks requiring data processing, web interactions, or custom business logic. To declare a tool during an agent run, use the settings.tools array to import a Tool.

Standard Tools

Agents come pre-packaged with standard tools usable during generation.
  • Current Date (current_date): Provides the current date to the model
  • Google Search (google_search): Lets an agent perform a Google Search
  • Web Scraper (web_scraper): Lets an agent scrape a web page
{
  "settings": {
    "tools": [
      {
        "type": "current_date"
      },
      {
        "type": "google_search"
      },
      {
        "type": "web_scraper"
      }
    ]
  }
}

Function Tools

Function tools let you define custom business logic with OpenAPI-style schemas that agents can call. First, create an agent with function tools configured, then invoke it using the responses endpoint.

Step 1: Create Agent with Function Tools

curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "journey-plan-agent",
  "role": "Itinerary Assistant",
  "description": "An agent that helps plan trips and find local events",
  "instructions": "Help users plan their journey. Call the current_date tool first to know the current date, then use get_local_events to find events.",
  "settings": {
    "max_iterations": 5,
    "max_execution_time": 300,
    "tools": [
      {
        "type": "current_date"
      },
      {
        "type": "function",
        "key": "get_local_events",
        "display_name": "Get Local Events",
        "description": "Retrieves local events for a given city and date",
        "function": {
          "name": "get_local_events",
          "parameters": {
            "type": "object",
            "properties": {
              "city": {
                "type": "string",
                "description": "The name of the city"
              },
              "date": {
                "type": "string",
                "description": "The date (YYYY-MM-DD)"
              }
            },
            "required": ["city", "date"]
          }
        }
      }
    ]
  },
  "model": "openai/gpt-4o",
  "path": "Default/agents"
}'

Step 2: Invoke the Agent

curl -X POST https://api.orq.ai/v2/agents/journey-plan-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "What is happening in Budapest on the 26th and 27th of August this year?"
      }
    ]
  }
}'
For the above request, using the Stream Run execution, we receive the following payloads showing the agent's reasoning process.
Iteration 1: Agent Calls the Current Date Tool
What's happening: The agent analyzes the user's question and decides it needs to know the current date first (following its instructions). It calls the orq_current_date tool.
{
  "type": "event.agents.thought",
  "data": {
    "message_difference": {
      "2": {
        "role": "agent",
        "parts": [
          {
            "kind": "tool_call",
            "tool_name": "orq_current_date",
            "tool_call_id": "call_VHTcKDA3nWxQUYOnqPHedURK",
            "arguments": {"timezone": "UTC"}
          }
        ]
      }
    },
    "iteration": 1
  }
}
Tool Result: Date Retrieved
What's happening: The orq_current_date tool returns 2025-10-03, so the agent now knows the user is asking about past events (August 26-27, 2025 has already passed).
{
  "type": "event.workflow_events.tool_execution_finished",
  "data": {
    "result": {
      "currentDate": "2025-10-03T04:22:50.867Z"
    }
  }
}
Iteration 2: Agent Calls Function Tools
What's happening: With the date context established, the agent makes two parallel calls to the get_local_events function tool (one for each requested date).
{
  "type": "event.agents.thought",
  "data": {
    "message_difference": {
      "4": {
        "role": "agent",
        "parts": [
          {
            "kind": "tool_call",
            "tool_name": "get_local_events",
            "arguments": {"city": "Budapest", "date": "2025-08-26"}
          },
          {
            "kind": "tool_call",
            "tool_name": "get_local_events",
            "arguments": {"city": "Budapest", "date": "2025-08-27"}
          }
        ]
      }
    },
    "iteration": 2,
    "accumulated_execution_time": 920.9
  }
}
Agent Paused: Awaiting Function Implementation
What's happening: Since get_local_events is a function tool (not a built-in tool), the agent pauses and waits for you to execute the function and provide the results via the continuation API.
{
  "type": "event.agents.inactive",
  "data": {
    "finish_reason": "function_call",
    "pending_tool_calls": [
      {
        "id": "call_pvC8CFzY8tjw8epqPkgHFMY4",
        "function": {
          "name": "get_local_events",
          "arguments": "{\"city\": \"Budapest\", \"date\": \"2025-08-26\"}"
        }
      },
      {
        "id": "call_QsVc8ZPMfJXHV3socDnYSC1a",
        "function": {
          "name": "get_local_events",
          "arguments": "{\"city\": \"Budapest\", \"date\": \"2025-08-27\"}"
        }
      }
    ]
  }
}
Note: You must implement the function and submit results via the continuation API. The agent will then resume and synthesize a final answer.
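A sketch of handling this pause locally: execute each pending call against your own implementation and collect the results keyed by call id. The get_local_events stub below is hypothetical, and the exact shape for submitting results back is documented in the API reference:

```python
import json

def get_local_events(city, date):
    # Hypothetical stand-in for your real implementation
    return {"city": city, "date": date, "events": ["Sample concert"]}

LOCAL_FUNCTIONS = {"get_local_events": get_local_events}

def run_pending_tool_calls(pending_tool_calls):
    """Execute each pending call locally.

    Note that `arguments` arrives as a JSON-encoded string, not an object."""
    results = {}
    for call in pending_tool_calls:
        fn = LOCAL_FUNCTIONS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results[call["id"]] = fn(**args)
    return results
```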

Code Tools (Python)

Code tools embed executable Python code. For this example, we'll embed a password generator.
import random
import string

def generate_password(length, include_numbers, include_symbols):
    """Generate a secure random password."""
    # Start with letters
    chars = string.ascii_letters

    # Add numbers if requested
    if include_numbers:
        chars += string.digits

    # Add symbols if requested
    if include_symbols:
        chars += '!@#$%^&*()_+-=[]{}|;:,.<>?'

    # Generate password
    password = ''.join(random.choice(chars) for _ in range(length))

    return {
        "password": password,
        "length": len(password),
        "includes_numbers": include_numbers,
        "includes_symbols": include_symbols,
        "character_types": len(chars)
    }

# Execute with params (`params` is provided by the code tool runtime)
result = generate_password(
    params.get('length', 12),
    params.get('include_numbers', True),
    params.get('include_symbols', True)
)
Tip: Converting Python Code to a Payload String
To convert your Python code into a JSON-safe string for the payload, use this shell command:
cat your_script.py | jq -Rs .
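If you prefer Python over jq, the standard json module produces the same JSON-safe string:

```python
import json

def to_payload_string(source: str) -> str:
    """Escape source code into a JSON string literal, like `jq -Rs .`."""
    return json.dumps(source)

code = 'def greet():\n    return "hi"\n'
payload = to_payload_string(code)  # safe to embed in the JSON request body
```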
Once the payload string is generated, it can be embedded within the code_tool section of your agent configuration. The tool is then usable like any other tool: it is callable by its key (here, password_generator) and has access to the declared parameters as input for execution.
curl -X POST https://api.orq.ai/v2/agents/password-helper-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Generate a 16-character password with numbers and symbols for my new database account"
      }
    ]
  }
}'

HTTP Tools

External API integrations support the following configuration:
  • Blueprint: URL templates with parameter substitution using {{parameter}} syntax
  • Method: HTTP methods (GET, POST, PUT, DELETE, etc.)
  • Headers: Custom headers, including authentication
  • Arguments: Parameter definitions with types and descriptions
    • send_to_model: Controls whether parameter values are visible to the LLM
    • default_value: Default values for parameters not provided by the LLM
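The {{parameter}} substitution can be illustrated with a small sketch. The weather URL is hypothetical, and the platform performs the real substitution server-side; this only shows the template semantics:

```python
import re
from urllib.parse import quote

def fill_blueprint(blueprint: str, arguments: dict) -> str:
    """Substitute {{parameter}} placeholders with URL-encoded argument values."""
    def repl(match):
        return quote(str(arguments[match.group(1).strip()]))
    return re.sub(r"\{\{(.*?)\}\}", repl, blueprint)

url = fill_blueprint("https://api.example.com/weather?q={{city}}", {"city": "Paris"})
```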

Example

curl -X POST https://api.orq.ai/v2/agents/weather-analyst-1/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "What is the current weather in Paris, France?"
      }
    ]
  }
}'

Multi-Agents

Multi-agent workflows enable complex task orchestration by creating teams of specialized agents that work together hierarchically. This system allows you to build sophisticated AI applications where different agents handle specific domains or functions, coordinated by an orchestrator agent. Agents can invoke sub-agents using a hierarchical agent system:
  • Orchestrator: Main agent that delegates tasks.
  • Sub-agents: Specialized agents for specific functions.
  • Delegation: Automatic task routing based on agent capabilities.
  • Context Sharing: Shared memory and knowledge between agents.

Prerequisites

Before using sub-agents, you must:
  1. Create Sub-agents First: Use the Agent CRUD endpoints (POST /v2/agents) to create the specialized agents
  2. Reference by key: Add the created agent keys to the team_of_agents array in your orchestrator agent configuration
  3. Include Required Tools: Add retrieve_agents and call_sub_agent tools to your orchestrator’s configuration to enable sub-agent discovery and delegation
  4. Define Roles: Specify each sub-agent’s role to help the orchestrator decide when to delegate.

Example

Creating Agents

We are creating two sub-agents that we will later orchestrate through an agent run.
Calling the Create Agent endpoint to create a Youth Agent
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "youth-agent",
  "role": "Agent specialized in talking to the younger generation, up to 25 years old",
  "description": "An agent which is tailored to a youthful tone",
  "instructions": "You are an agent who is tasked with talking to people under 25 years old and tries to match the tone that the younger generation uses in an informal but respectful way. You use hip lingo and plenty of emojis that help them relate to you.",
  "settings": {
    "max_iterations": 15,
    "max_execution_time": 300,
    "tools": [
      {
        "type": "current_date"
      }
    ]
  },
  "model": "openai/gpt-4o",
  "path": "Default/agents"
}'
Calling the Create Agent endpoint to create a Formal Agent
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "formal-agent",
  "role": "Agent specialized in talking to a more mature audience",
  "description": "An agent which is tailored to formal communication",
  "instructions": "You are a professional agent responsible for engaging with individuals over 25 years of age. Your communication should maintain a formal and respectful tone.",
  "settings": {
    "max_iterations": 15,
    "max_execution_time": 300,
    "tools": [
      {
        "type": "current_date"
      }
    ]
  },
  "model": "openai/gpt-4o",
  "path": "Default/agents"
}'
To update an existing agent, issue a similar call using the PATCH method with an existing agent_key, e.g. PATCH /v2/agents/youth-agent. To learn more, see Updating an Agent.

Orchestrating Agents

  • Orchestrator agents must use the retrieve_agents tool to discover available sub-agents before delegating tasks.
  • Include instructions like: “Use retrieve_agents to see what specialized agents are available, then use call_sub_agent to delegate appropriate tasks based on their capabilities.”
  • Both tools must be included in the orchestrator’s configuration.

Step 1: Create the Orchestrator Agent

Now that you have created the sub-agents, create the orchestrator agent using the CRUD endpoint. Reference the sub-agents through the team_of_agents field by their keys and roles. You can later update this orchestrator (and its sub-agents) using the PATCH endpoint.
Calling the Create Agent endpoint to create an Orchestrator Agent
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "key": "tone-orchestrator",
  "role": "Book Assistant",
  "description": "A friendly orchestrator agent who strives to help people by coordinating specialized sub-agents",
  "instructions": "Answer the user's question using your sub-agents. First retrieve the available sub-agents, then delegate: instruct each sub-agent clearly on what it needs to do, have it rephrase your draft answer according to its own instructions, and include its message in the final response with NO CHANGES.",
  "settings": {
    "max_iterations": 15,
    "max_execution_time": 600,
    "tool_approval_required": "none",
    "tools": [
      {
        "type": "current_date"
      },
      {
        "type": "retrieve_agents"
      },
      {
        "type": "call_sub_agent"
      }
    ]
  },
  "model": "openai/gpt-4o",
  "path": "Default/agents",
  "team_of_agents": [
    {
      "key": "youth-agent",
      "role": "The youth agent for ages under 25"
    },
    {
      "key": "formal-agent",
      "role": "The formal agent for ages above 25"
    }
  ]
}'
You can update the orchestrator agent and its sub-agents at any time using the PATCH endpoint (e.g., PATCH /v2/agents/tone-orchestrator). This allows you to refine instructions, add or remove sub-agents from the team_of_agents array, or modify settings while keeping the same agent key.

Step 2: Run the Orchestrator Agent

Once the orchestrator agent is created, invoke it by referencing its key. The orchestrator will use the sub-agents defined in the team_of_agents array during the creation step.
Calling the Create Response endpoint with the Orchestrator Agent key
curl -X POST https://api.orq.ai/v2/agents/tone-orchestrator/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "I am writing a book for young adults about financial literacy. Can you help me come up with an engaging introduction that would appeal to teenagers, and then also provide a more formal version for a business audience?"
      }
    ]
  },
  "variables": {},
  "metadata": {},
  "background": false
}'
The request body includes these key fields:
  • message (required): The user message in A2A format with role and parts array
  • variables (optional): Template variables for dynamic content substitution
  • metadata (optional): Custom metadata to track the request
  • background (optional): Set to true for asynchronous execution, false for synchronous (waits for completion)
  • contact (optional): Include user/contact information for context
  • thread (optional): Reference previous conversation threads for continuity
  • memory (optional): Entity-based memory for personalized context
For the basic orchestrator invocation, only the message field is required.
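A sketch of assembling this body, keeping only message required and validating the optional field names listed above (the helper name is illustrative):

```python
def build_request_body(text, **optional):
    """Build a Responses API request body; only `message` is required."""
    allowed = {"variables", "metadata", "background", "contact", "thread", "memory"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    body = {"message": {"role": "user",
                        "parts": [{"kind": "text", "text": text}]}}
    body.update(optional)
    return body
```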