This page covers executing agents that are already configured. For building and configuring agents, see Build Agents.

Run Agents

For Python and Node.js client libraries, see Orq SDKs.
Send a message to an agent using the Responses API:
curl -X POST https://api.orq.ai/v2/agents/my-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "background": false,
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Help me plan a microservices architecture for our e-commerce platform."
      }
    ]
  }
}'
With background: false (default), the call waits for the agent to finish and returns a completed task object including an output array:
{
  "id": "01K6D8QESESZ6SAXQPJPFQXPFT",
  "contextId": "0ae12412-cdea-408e-a165-9ff8086db400",
  "kind": "task",
  "status": {
    "state": "completed",
    "timestamp": "2025-09-30T12:14:35.123Z",
    "message": {
      "role": "agent",
      "parts": [{ "kind": "text", "text": "Here's a microservices architecture..." }]
    }
  },
  "output": [
    {
      "parts": [{ "kind": "text", "text": "Here's a microservices architecture..." }]
    }
  ],
  "metadata": {
    "orq_agent_key": "my-agent"
  }
}
Set background: true to return immediately without waiting: the response will have status.state: "submitted" and no output. Use the returned id to continue the conversation.
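With background: true, a typical client polls the task until it leaves the submitted and working states. Below is a minimal polling sketch; fetch_task is a hypothetical stand-in for an HTTP GET of the task (stubbed here with canned states so the loop can run standalone), not a documented endpoint wrapper.

```python
import time

# Hypothetical helper: a real integration would issue an HTTP request for
# the task by ID; here it is stubbed with canned states so the polling
# logic runs standalone.
_FAKE_STATES = iter(["submitted", "working", "working", "completed"])

def fetch_task(task_id: str) -> dict:
    return {"id": task_id, "status": {"state": next(_FAKE_STATES)}}

def wait_for_task(task_id: str, interval: float = 0.0) -> dict:
    """Poll until the task leaves the submitted/working states."""
    while True:
        task = fetch_task(task_id)
        state = task["status"]["state"]
        if state not in ("submitted", "working"):
            return task
        time.sleep(interval)

final = wait_for_task("01K6D8QESESZ6SAXQPJPFQXPFT")
print(final["status"]["state"])  # completed
```

In production, use a nonzero interval or exponential backoff rather than tight polling.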

Pass Variables

Pass variables in the variables field of the execution request:
curl -X POST https://api.orq.ai/v2/agents/my-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "I need help with my account."}]
    },
    "variables": {
      "user_name": "John Smith",
      "user_role": "admin",
      "company_name": "Acme Corp"
    }
  }'
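Conceptually, the agent substitutes these values into its configured instructions. The sketch below assumes {{name}}-style placeholders for illustration; the actual templating syntax is configured per agent, as described in Build Agents: Variables and Templates.

```python
import re

# Illustrative substitution, assuming {{name}}-style placeholders.
# Unknown placeholders are left untouched.
def render(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

instructions = "You are assisting {{user_name}} ({{user_role}}) at {{company_name}}."
print(render(instructions, {
    "user_name": "John Smith",
    "user_role": "admin",
    "company_name": "Acme Corp",
}))
# You are assisting John Smith (admin) at Acme Corp.
```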
To define which variables the agent uses and configure templating, see Build Agents: Variables and Templates.

Attach Files

Attach files in the parts array of the message:
  • Images: Via URL (uri) or base64 encoding (bytes)
  • PDFs: Base64 encoding only (bytes). URI links are not supported for PDFs.
  • MIME Types: Required. Specify the correct mimeType (e.g. image/jpeg, application/pdf).
Verify the chosen model supports the file types in use. Vision models are required for image and PDF processing.
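These constraints can be checked client-side before sending. The validator below simply mirrors the rules above; the helper itself is illustrative and not part of the API.

```python
def validate_file_part(part: dict) -> list:
    """Return a list of problems with a file part, per the rules above."""
    errors = []
    file = part.get("file", {})
    if not file.get("mimeType"):
        errors.append("mimeType is required")
    if file.get("mimeType") == "application/pdf" and "uri" in file:
        errors.append("PDFs must be sent base64-encoded (bytes), not via uri")
    if "uri" not in file and "bytes" not in file:
        errors.append("either uri or bytes must be set")
    return errors

print(validate_file_part({
    "kind": "file",
    "file": {"uri": "https://example.com/doc.pdf", "mimeType": "application/pdf"},
}))
# ['PDFs must be sent base64-encoded (bytes), not via uri']
```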
Attach an image via URL:
curl -X POST https://api.orq.ai/v2/agents/image-classifier/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "What can you see in this image?"
      },
      {
        "kind": "file",
        "file": {
          "uri": "https://example.com/image.jpg",
          "mimeType": "image/jpeg",
          "name": "image.jpg"
        }
      }
    ]
  }
}'
Attach a PDF via base64: Convert the file to base64 first:
Python
import base64

def file_to_base64(file_path: str) -> str:
    with open(file_path, "rb") as file:
        return base64.b64encode(file.read()).decode("utf-8")

pdf_base64 = file_to_base64("path/to/document.pdf")
Then include it in the message:
{
  "kind": "file",
  "file": {
    "bytes": "<base64-encoded-content>",
    "mimeType": "application/pdf",
    "name": "document.pdf"
  }
}
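Putting the two steps together, this sketch encodes a file and assembles the full message payload. The document contents and names are placeholders (a temporary file stands in for a real PDF so the sketch runs anywhere).

```python
import base64
import tempfile

def file_to_base64(file_path: str) -> str:
    with open(file_path, "rb") as file:
        return base64.b64encode(file.read()).decode("utf-8")

# Stand-in document so the sketch runs without a real PDF on disk.
with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
    tmp.write(b"%PDF-1.4 example")
    path = tmp.name

message = {
    "role": "user",
    "parts": [
        {"kind": "text", "text": "Summarize this document."},
        {
            "kind": "file",
            "file": {
                "bytes": file_to_base64(path),
                "mimeType": "application/pdf",
                "name": "document.pdf",
            },
        },
    ],
}
```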

Continue a Task

After receiving a task response, continue the conversation by passing the previously received task_id in the next request; the agent maintains full context from previous exchanges. When polling task state, a task in the input-required state means the agent is inactive, waiting for input, and ready for continuation. Pass the same task_id to resume.
curl -X POST https://api.orq.ai/v2/agents/my-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "task_id": "01K6D8QESESZ6SAXQPJPFQXPFT",
  "background": false,
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Can you expand on the challenges section?"
      }
    ]
  }
}'
The continuation returns a new task ID for the extended conversation. The agent retains full context from all prior turns.
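A multi-turn client therefore just threads the latest task ID through each request. Below is a sketch with a stubbed send_message helper (a real client would POST each body to the Responses API via curl or an SDK; the stub returns sequential fake IDs so the logic runs standalone).

```python
# Stubbed transport so the turn-threading logic runs standalone; a real
# client would POST each body to /v2/agents/{key}/responses.
_turns = 0

def send_message(text, task_id=None):
    global _turns
    _turns += 1
    body = {"message": {"role": "user", "parts": [{"kind": "text", "text": text}]}}
    if task_id is not None:
        body["task_id"] = task_id  # continue the existing conversation
    return {"id": f"task-{_turns}", "status": {"state": "completed"}, "request": body}

first = send_message("Help me plan a microservices architecture.")
followup = send_message("Can you expand on the challenges section?", task_id=first["id"])
print(first["id"], "->", followup["id"])  # task-1 -> task-2
```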

Use Memory Stores

To call the Agent with a memory store, we’ll use the Responses API with an Embedded message and Linked memory.
curl -X POST https://api.orq.ai/v2/agents/agent-memories/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "memory": {
    "entity_id": "customer_456"
  },
  "message": {
    "role": "user",
    "parts": [
      {
        "kind": "text",
        "text": "Do you remember what is my name?"
      }
    ]
  }
}'
You can use multiple memory stores per call. Make sure the entity_id sent with each call maps consistently to all memory stores declared during agent creation.
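A small helper keeps that single entity_id consistent across calls. This is an illustrative request builder, not an SDK function; the field names match the curl example above.

```python
def build_memory_request(text: str, entity_id: str) -> dict:
    """Assemble a Responses API body that scopes memory to one entity.
    The single entity_id applies to every memory store linked to the
    agent, so it must map consistently across all of them."""
    return {
        "memory": {"entity_id": entity_id},
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": text}],
        },
    }

body = build_memory_request("Do you remember what my name is?", "customer_456")
```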

Agent and Task States

Agent execution can take a long time. If the agent appears to be hanging, it is most likely still running. Wait and check the panel again later.
Agent states:
  • Active: Execution in progress; continuation requests blocked
  • Inactive: Waiting for user input or tool results; ready for continuation
  • Error: Execution failed; continuation blocked
  • Approval Required: Tool execution requires manual approval (coming soon)
Task states:
  • Submitted: Task created and queued for execution
  • Working: Agent actively processing
  • Input Required: Waiting for user input or tool results
  • Completed: Task finished successfully
  • Failed: Task encountered an error
  • Canceled: Task was manually canceled
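The task states above group naturally into running, waiting, and terminal. A polling loop might use a classifier like the sketch below; the grouping follows the list above, and the helper itself is illustrative, not part of the API.

```python
# Grouping of the task states listed above.
RUNNING = {"submitted", "working"}
WAITING = {"input-required"}
TERMINAL = {"completed", "failed", "canceled"}

def should_keep_polling(state: str) -> bool:
    """Keep polling only while the agent is still actively processing."""
    return state in RUNNING

print(should_keep_polling("working"), should_keep_polling("completed"))  # True False
```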

Multi-Agent Workflows

Multi-agent workflows are configured at the agent level. Each agent in a team is created individually; the orchestrator then references sub-agents through its team_of_agents configuration. The Description field on each sub-agent is critical: orchestrators use it to decide when to delegate.
To configure multi-agent setups, including how to write descriptions that enable effective delegation, see Build Agents: Instructions.

Traces

The Traces tab in the agent page shows execution logs filtered to the agent automatically.
Trace data includes:
  • Execution history with timestamps
  • Input and output for each call
  • Token usage and cost per execution
  • Execution duration and performance metrics
  • Errors and debugging information
  • Tool calls executed (function, HTTP, code, or MCP calls)
  • Knowledge retrieval results and RAG context
  • Memory store interactions

Trace Views

Each trace can be inspected in three views:
The Trace view shows the full execution tree for a single agent run. Each step is displayed hierarchically, including LLM calls, tool invocations, knowledge retrievals, and memory interactions.

Creating Custom Views

Save frequently used filter combinations as reusable views:
  1. Set the desired filters.
  2. Click All Rows (top right).
  3. Select Create New View.
  4. Give the view a title.
  5. Optionally check Set view private (default is shared with project members).
For advanced filtering and cross-agent analysis, see Traces.