Set background: true to return immediately without waiting; the response will have status.state: "submitted" and no output. Use the returned id to continue the conversation.
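Assuming the request body mirrors the curl examples in this guide, a background submission can be sketched in Python. The snippet only builds the JSON body; the prompt text is a placeholder:

```python
import json

# Body for a fire-and-forget run: the API answers immediately with
# status.state == "submitted" and no output; keep the returned id to
# continue the task later. The prompt text is an example.
payload = {
    "background": True,
    "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Draft a summary of last week"}],
    },
}

body = json.dumps(payload)  # POST this to /v2/agents/{key}/responses
```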
After receiving a task response, continue the conversation by passing the previously received task_id in the next request. The agent maintains full context from previous exchanges.

When polling task state, a task in the input-required state means the agent is inactive and waiting for input; it is ready for continuation. Pass the same task_id to resume.
curl -X POST https://api.orq.ai/v2/agents/my-agent/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "task_id": "01K6D8QESESZ6SAXQPJPFQXPFT",
    "background": false,
    "message": {
      "role": "user",
      "parts": [
        { "kind": "text", "text": "Can you expand on the challenges section?" }
      ]
    }
  }'
The continuation returns a new task ID for the extended conversation. The agent retains full context from all prior turns.
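Since each continuation returns a new task ID, a multi-turn exchange threads the latest ID into the next request. A minimal sketch of that chaining, assuming the body shape shown in the curl example above:

```python
import json

def continuation_body(task_id: str, text: str) -> str:
    """Build a follow-up request body. Each turn's response carries a
    new task ID, which must replace the old one in the next call."""
    return json.dumps({
        "task_id": task_id,
        "background": False,
        "message": {"role": "user", "parts": [{"kind": "text", "text": text}]},
    })

# Thread the ID from the previous response (placeholder value here):
follow_up = continuation_body("01K6D8QESESZ6SAXQPJPFQXPFT",
                              "Expand the challenges section")
```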
To call the Agent with a memory store, we’ll use the Responses API with an Embedded message and Linked memory.
curl -X POST https://api.orq.ai/v2/agents/agent-memories/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "memory": { "entity_id": "customer_456" },
    "message": {
      "role": "user",
      "parts": [
        { "kind": "text", "text": "Do you remember what is my name?" }
      ]
    }
  }'
You can use multiple memory stores per call. Ensure that the entity_id sent with each call maps consistently to all memory stores declared during agent creation.
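One way to keep that mapping consistent is to derive the entity_id from a single stable user identifier and reuse it on every call, as in this sketch (the identifier is an example):

```python
import json

ENTITY_ID = "customer_456"  # one stable identifier per end user (example value)

def memory_body(text: str) -> str:
    """Body for a memory-backed call; reusing the same entity_id means
    every memory store declared on the agent resolves to the same person."""
    return json.dumps({
        "memory": {"entity_id": ENTITY_ID},
        "message": {"role": "user", "parts": [{"kind": "text", "text": text}]},
    })

request = memory_body("Do you remember what is my name?")
```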
Run an agent on a recurring or one-off cadence without holding open an HTTP connection. Each scheduled run follows the same execution path, tracing, and billing as a direct API call.
Open the agent and go to the Schedules tab. Click New schedule to open the form.
| Field | Description |
| --- | --- |
| Name | A display label for the schedule in the UI. Required. Not sent to the agent. |
| Frequency | Hourly, Daily, or Weekly. |
| Time (UTC) | The hour the schedule fires. Shown for Daily and Weekly. |
| Pick the day | Day of the week to fire. Shown for Weekly only. |
| Summary | Auto-generated human-readable description of the schedule. |
| Input | The user message sent to the agent on each firing. Required, since every agent invocation needs a user message. |
| Variables | Key-value pairs passed to the agent on each run. See below. |
| Metadata | Key-value pairs attached to every response this schedule generates. See below. |
Variables

Use the Variables section to define values that the agent needs on each run. Variables are sent alongside the input as a distinct payload field, and can be consumed by the agent’s instructions, any configured tool, or a subagent wherever the variable is wired up.
For example, a support agent with an HTTP tool that looks up a customer in an external system can receive customer_id=1234 from the schedule and use it to query the right record on every run. See the screenshot below.
Variables cannot be referenced inside the Input field itself. Wire them into the agent’s instructions, a tool, or a subagent instead.

Metadata

Use the Metadata section to attach arbitrary key-value pairs to every response generated by this schedule. Metadata is not passed to the agent: it is stored on the trace and can be used to filter traces in Observability, identify which schedule triggered a run, or tag responses for downstream processing.

Click Create to activate the schedule. It starts firing at the next matching time.
Three schedule types are supported:
| Type | Expression | Example | Fires |
| --- | --- | --- | --- |
| interval | @every <duration> | @every 6h | Every 6 hours from creation |
| cron | 6-field cron: sec min hour dom month dow | 0 0 9 * * mon-fri | 9:00 UTC Mon–Fri |
| once | @at <RFC3339-UTC> | @at 2026-06-01T09:00:00Z | Exactly once |
Predefined descriptors also work: @hourly, @daily, @weekly, @monthly, @yearly. All timestamps are UTC.
Minimum interval is 1 hour. Expressions that fire more frequently (e.g. @every 30m, */5 * * * * *) are rejected at validation time. once schedules are exempt. To run an agent in response to an event rather than a clock, call the Responses API directly.
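The 1-hour floor can also be checked client-side before creating a schedule. The sketch below handles only single-unit @every expressions (such as 6h or 30m); the server's validator remains authoritative:

```python
import re

MIN_SECONDS = 3600  # schedules firing more often than hourly are rejected

def interval_seconds(expr: str) -> int:
    """Parse a single-unit '@every <duration>' such as '@every 6h'.
    Compound durations (e.g. '1h30m') are out of scope for this sketch."""
    m = re.fullmatch(r"@every\s+(\d+)([hms])", expr.strip())
    if not m:
        raise ValueError(f"not a recognized interval expression: {expr!r}")
    value, unit = int(m.group(1)), m.group(2)
    return value * {"h": 3600, "m": 60, "s": 1}[unit]

def passes_minimum(expr: str) -> bool:
    """Mirror of the server-side minimum-interval rule."""
    return interval_seconds(expr) >= MIN_SECONDS
```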
The TypeScript SDK uses camelCase keys (agentKey, requestBody) and nests the request body under requestBody, while the Python SDK uses flat keyword arguments. Both map to the same wire format.
payload is required. Schedule records in responses use _id rather than id.

| Field | Type | Description |
| --- | --- | --- |
| payload | string | The instruction the agent runs on each firing. Supports template variables via {{variable}}. Required. |
| variables | object | Template variable substitution. Use {"secret": true, "value": "..."} for secret values. |
| memory_entity_id | string | Memory store entity to attach on each run. |
| metadata | object | Opaque key/value pairs attached to every response this schedule generates. |
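Putting the fields above together, a create-schedule body might look like the following sketch; the cadence, variable values, and metadata are illustrative:

```python
import json

# Sketch of a create-schedule request body using the documented fields.
schedule = {
    "type": "cron",
    "expression": "0 0 9 * * mon-fri",                 # sec min hour dom month dow
    "payload": "Summarize open tickets for {{team}}",  # required; {{team}} substituted
    "variables": {"team": "payments"},                 # example variable
    "memory_entity_id": "ops_digest",                  # example memory entity
    "metadata": {"triggered_by": "daily-digest"},      # example tag on each response
}

create_body = json.dumps(schedule)
```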
generation increments each time type or expression changes and resets trigger_count to 0. Use it to distinguish firings before and after a cadence change.

Use agent_tag (string) to pin the schedule to a specific agent version. Omit it to always use the active version.
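For instance, a PATCH body pinning the schedule to one published version could be sketched as follows; the tag value is hypothetical:

```python
import json

# Pin the schedule to a specific published agent version; removing
# agent_tag reverts the schedule to always running the active version.
pin_body = json.dumps({"agent_tag": "v3"})
```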
All schedules for the agent are listed in the Schedules tab. Click a schedule row to open its details, including trigger count and last fired time.
# List all schedules
curl https://api.orq.ai/v3/agents/ops_digest/schedules \
  -H "Authorization: Bearer $ORQ_API_KEY"

# Get a single schedule
curl https://api.orq.ai/v3/agents/ops_digest/schedules/{schedule_id} \
  -H "Authorization: Bearer $ORQ_API_KEY"
List returns { "schedules": [...] }, most recent first. The single-schedule response includes trigger_count, last_triggered_at (UTC timestamp string; null before the first firing), and generation.
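Reading those fields back can be sketched as follows; the sample record is illustrative, not real API output:

```python
import json

# Hypothetical list response matching the documented shape.
raw = '''{"schedules": [
  {"_id": "01ABC", "trigger_count": 0, "last_triggered_at": null, "generation": 0}
]}'''

schedules = json.loads(raw)["schedules"]           # most recent first
latest = schedules[0]
never_fired = latest["last_triggered_at"] is None  # null before the first firing
```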
Payload-only and agent_tag-only changes do not reset the firing cadence and apply to the next regular run. Changing type or expression shifts the cadence from the PATCH time and resets trigger_count to 0.

Lifecycle notes:
once schedules: After firing, is_active flips to false automatically. To re-run, PATCH both is_active: true and a new future expression.
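A sketch of that re-arming PATCH body; the timestamp is an example:

```python
import json

# Re-activate a fired one-off: flip is_active back on and supply a new
# future @at expression in the same PATCH call.
rearm_body = json.dumps({
    "is_active": True,
    "expression": "@at 2026-07-01T09:00:00Z",
})
```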
Missed firings: Not replayed. If the service is unavailable when a schedule fires, that firing is lost. Cron and interval schedules resume on their next scheduled time once service is restored.
Runs the schedule’s payload immediately without affecting its regular cadence. Useful for smoke-testing a new schedule or manually re-running a missed execution.
curl -X POST https://api.orq.ai/v3/agents/ops_digest/schedules/{schedule_id}/execution \
  -H "Authorization: Bearer $ORQ_API_KEY"
The run appears in traces as a schedule.<agent_key> leading span roughly 10 seconds later, carrying orq.schedule_id and the full agent execution chain. Schedule-driven cost and token usage appear in usage reports alongside HTTP-invoked runs. Inactive schedules return 400 schedule_inactive.
Multi-agent workflows are configured at the agent level. Each agent in a team is created individually, then the orchestrator references sub-agents through its team_of_agents configuration.

The Description field on each sub-agent is critical: orchestrators use it to decide when to delegate.
To configure multi-agent setups, see Build Agents: Instructions for how to write descriptions that enable effective delegation.
Multi-agent workflows use a hierarchical system:
Orchestrator: Main agent that delegates tasks using call_sub_agent.
Sub-agents: Specialized agents for specific functions.
Delegation: Automatic routing based on sub-agent descriptions and capabilities.
Step 1: Create sub-agents.

Create each specialized agent individually. The description field drives orchestrator delegation decisions.

Step 2: Create the orchestrator.

Reference sub-agents in the team_of_agents array. Include retrieve_agents and call_sub_agent tools.
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "orchestrator",
    "role": "Task Coordinator",
    "description": "Coordinates specialized agents to handle diverse user requests",
    "instructions": "Answer the user using your sub-agents. Use retrieve_agents to discover available agents, then call_sub_agent to delegate tasks based on their capabilities.",
    "settings": {
      "max_iterations": 15,
      "max_execution_time": 600,
      "tools": [
        { "type": "retrieve_agents" },
        { "type": "call_sub_agent" }
      ]
    },
    "model": "openai/gpt-4o",
    "path": "Default/agents",
    "team_of_agents": [
      { "key": "specialist-a", "role": "Handles domain A" },
      { "key": "specialist-b", "role": "Handles domain B" }
    ]
  }'
Step 3: Invoke the orchestrator.

Invoke the orchestrator the same way as any other agent. It handles delegation internally.
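The invocation body is therefore the ordinary responses shape; a sketch, with a placeholder prompt:

```python
import json

# Same body as any direct agent call; delegation to specialist-a /
# specialist-b happens inside the run via call_sub_agent.
invoke_body = json.dumps({
    "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Handle this request end to end"}],
    },
})
```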
Orchestrator agents must include retrieve_agents to discover sub-agents before delegating. Add explicit instructions: “Use retrieve_agents to see what specialized agents are available, then call_sub_agent to delegate.”
Update the orchestrator at any time with PATCH /v2/agents/{key} to add or remove sub-agents from team_of_agents.
Find all agents available as sub-agents:
Search for all agents in the Default/agents project
The assistant uses search_entities with type: "agent" to list available agents.

Set up an orchestrator:
Create an orchestrator agent that coordinates "youth-agent" and "formal-agent" for tone-matched responses
The assistant uses create_agent with the team_of_agents array and retrieve_agents / call_sub_agent tools.
The Traces tab in the agent page shows execution logs filtered to the agent automatically.
Trace data includes:
Execution history with timestamps
Input and output for each call
Token usage and cost per execution
Execution duration and performance metrics
Errors and debugging information
Tool calls executed (function, HTTP, code, or MCP calls)
Knowledge retrieval results and RAG context
Memory store interactions
All agent executions are automatically traced. Access traces in the AI Studio or via the Traces API.

For programmatic trace access, see the Observability documentation.
List recent traces for an agent:
Show me the last 20 traces for "support-bot" sorted by most recent
The assistant uses list_traces with a filter on the agent key.

Inspect a specific trace:
Show me the full span details for trace ID 01K6D8QESESZ6SAXQPJPFQXPFT
The assistant uses list_spans to retrieve the full execution tree for that trace.

Debug errors:
Find all failed traces for "support-bot" from the last 24 hours and summarize the errors
The assistant uses list_traces filtered by status:=ERROR and time range, then get_span on relevant spans to surface root causes.
The Trace view shows the full execution tree for a single agent run. Each step is displayed hierarchically, including LLM calls, tool invocations, knowledge retrievals, and memory interactions.
The Thread view presents the execution as a conversation thread, showing the sequence of messages exchanged between the user, the agent, and any tools.
The Timeline view shows execution steps plotted against time, making it easy to identify bottlenecks and understand parallel vs sequential operations.