Create and configure AI agents in Orq.ai. Set instructions, select models, attach tools, knowledge bases, memory stores, and guardrails through the AI Studio, API, or Orq MCP.
Name and describe the Agent. Use the AI assistant to pre-configure the role and instructions, or choose Start from scratch for full manual control. The Agent Studio opens with a customizable template.
The Agent Studio has three panels:
Instructions Panel (left): Define what the agent does and how it behaves.
Configuration Panel (center): Set up model, tools, context, evaluators, and constraints.
Chat Panel (right): Chat with the agent and test its behavior.
Save the configuration at any time using the Publish button.
The Agents API provides endpoints for creating, executing, and managing AI agents, with support for tools, memory, knowledge bases, and real-time streaming. Payloads follow the A2A Protocol.
Prerequisites:
A Project in the workspace (used as the path for resources)
The Instructions panel defines the agent’s behavior, goals, and personality. Write clear, exhaustive instructions to keep behavior consistent across executions.
Use the AI button to generate effective instructions for the agent.
Example: Customer Support Agent
You are an experienced customer support specialist for the SaaS company **{company_name}**.

Your job is to provide clear, concise, and accurate answers to customer inquiries about {product_name}. Responses should be brief, no more than **150 words**, and include any necessary next-step actions.

**Step-by-Step Instructions**

1. **Read the query**: `{customer_query}`.
2. **Extract the core problem** (e.g., password reset, API error, pricing).
3. **Draft a concise answer**: no more than 150 words.
4. **Add suggested next steps**: at most 3 actions the customer can take.
5. **End with a friendly closing** and a reminder of available support channels (`{support_contact}`).
The key instruction fields on the agent object:
| Field | Description |
| --- | --- |
| `instructions` | Main instructions for the agent's behavior and goals |
| `role` | Agent's responsibility and coverage, reinforced at execution |
| `description` | Used by other agents to discover and delegate to this agent |
| `system_prompt` | Additional system-level context injected before execution |
Update instructions on an existing agent:
```shell
curl -X PATCH https://api.orq.ai/v2/agents/my-agent \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "instructions": "You are a helpful assistant. Be concise and accurate.",
    "role": "General Assistant",
    "description": "A general-purpose assistant for answering questions and completing tasks"
  }'
```
Role: Defines the agent’s responsibility and coverage. Sent to the agent during execution to reinforce its perimeter.
Description: Used by other agents in multi-agent setups to understand what this agent can do. Write a detailed description so orchestrators delegate correctly.
Embed reusable content blocks in agent instructions using {{snippet.key}}, where key is the key of a Prompt Snippet in the project. Updating a snippet propagates the change to every agent instruction that references it.
Reference dynamic values in agent instructions using double braces: {{variableName}}. Pass a key-value map in the variables field at invocation time and Orq.ai substitutes each variable before execution.
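As an illustration, the Text engine's substitution step can be sketched in Python. `render_text_template` is a hypothetical helper, not part of any Orq.ai SDK, and the platform's exact handling of missing variables may differ:

```python
import re

def render_text_template(template: str, variables: dict) -> str:
    """Replace every {{variableName}} placeholder with its value.

    Minimal sketch of the default Text engine behavior; placeholders
    with no matching variable are left untouched here.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

instructions = "You are a support agent for {{company_name}}. Answer: {{customer_query}}"
variables = {"company_name": "Acme", "customer_query": "How do I reset my password?"}
print(render_text_template(instructions, variables))
# → You are a support agent for Acme. Answer: How do I reset my password?
```

At invocation time you would pass the same key-value map in the `variables` field and let Orq.ai perform the substitution server-side.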
Orq.ai supports three template engines. Select the Template Engine from the Agent Settings panel:
Text (default): variables use {{double_braces}} syntax.
Jinja: full templating with conditionals, loops, filters, and more.
Mustache: logic-less templating with sections.
Jinja example
Instructions template:
```
You are a support assistant for {{company_name}}.

{% if user_tier == "premium" %}
{{customer_name}} is a premium customer. Greet them by name and let them know they have priority support.
{% else %}
{{customer_name}} is on the free plan. Standard response time is 24 hours.
{% endif %}
```
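Assuming standard Jinja semantics, you can preview how such a template renders locally with the `jinja2` package (an illustration only; Orq.ai performs the substitution server-side):

```python
from jinja2 import Template  # pip install jinja2

template = Template(
    "You are a support assistant for {{company_name}}.\n"
    '{% if user_tier == "premium" %}'
    "{{customer_name}} is a premium customer."
    "{% else %}"
    "{{customer_name}} is on the free plan."
    "{% endif %}"
)

# Variables are passed as a key-value map, mirroring the `variables`
# field sent at invocation time.
print(template.render(company_name="Acme", user_tier="premium", customer_name="Dana"))
```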
Mustache example
Instructions template:
```
You are a support assistant for {{company_name}}.

{{#is_premium}}
{{customer_name}} is a premium customer. Priority support with a 2-hour SLA.
{{/is_premium}}
{{^is_premium}}
{{customer_name}} is on the free plan. Standard response time is 24 hours.
{{/is_premium}}
```
Tools extend the agent’s capabilities by allowing it to interact with external systems, execute code, or fetch information. Add tools from the Tool selection modal.
Declare tools in the settings.tools array when creating or updating an agent.
Add a standard tool to an agent:
Add the google_search and current_date tools to the "research-bot" agent
The assistant uses update_agent with the updated settings.tools array.
Add a custom tool by key:
Add the HTTP tool with key "weather_api" to the "weather-bot" agent
The assistant uses update_agent with {"type": "http", "key": "weather_api"} in settings.tools.
Agent instructions must explicitly mention the available tools so the model knows when and how to invoke them. The model will not use tools unless the instructions clearly describe:
What each tool does
When to use it
How to call it
Example: Web Search Agent
You are a research assistant. Your job is to find current information.

**Available Tools:**

1. **google_search** - Search the internet for information
   - Use this when you need current information or factual data
   - Provide clear search queries
   - Example: When asked "What are the latest AI developments?", call google_search with "latest AI developments 2025"
Define custom functions inline with an OpenAPI-style schema.
```json
{
  "type": "function",
  "key": "get_local_events",
  "display_name": "Get Local Events",
  "description": "Retrieves local events for a given city and date",
  "function": {
    "name": "get_local_events",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "The name of the city"
        },
        "date": {
          "type": "string",
          "description": "The date (YYYY-MM-DD)"
        }
      },
      "required": ["city", "date"]
    }
  }
}
```
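If your integration executes function calls itself, dispatch might look like the sketch below. The `get_local_events` implementation and its return shape are hypothetical; only the tool name and parameters come from the schema above:

```python
import json

def get_local_events(city: str, date: str) -> list:
    # Hypothetical implementation: in practice this would call an
    # events API or database.
    return [{"city": city, "date": date, "name": "Farmers Market"}]

# Registry mapping tool names to local implementations.
TOOL_REGISTRY = {"get_local_events": get_local_events}

def dispatch_tool_call(name: str, arguments_json: str):
    """Look up the named tool and invoke it with the model's JSON arguments."""
    arguments = json.loads(arguments_json)
    return TOOL_REGISTRY[name](**arguments)

result = dispatch_tool_call(
    "get_local_events", '{"city": "Amsterdam", "date": "2025-06-01"}'
)
```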
Unlike Deployments, a Knowledge Base attached to an agent is not queried on every request. The agent decides when to use the query_knowledge_base tool based on context.
The Knowledge Base description must be explicit so the agent knows when to query it.
For more on building Knowledge Bases for Agents, see Knowledge Bases.
Add the knowledge_bases array to the agent configuration. Include the retrieve_knowledge_bases and query_knowledge_base tools so the agent can discover and query them.
```shell
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "knowledge-agent",
    "instructions": "Help the user. First use retrieve_knowledge_bases to see what knowledge sources are available, then query_knowledge_base to find relevant information.",
    "path": "Default/agents",
    "model": { "id": "openai/gpt-4o" },
    "settings": {
      "max_iterations": 5,
      "max_execution_time": 600,
      "tools": [
        { "type": "retrieve_knowledge_bases" },
        { "type": "query_knowledge_base" }
      ]
    },
    "knowledge_bases": [
      { "knowledge_id": "my_knowledge_base" }
    ]
  }'
```
Agents must use retrieve_knowledge_bases before querying. Guide the agent with instructions like: “First use retrieve_knowledge_bases to see what knowledge sources are available, then query_knowledge_base to find relevant information.”
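The same create-agent body can also be assembled programmatically. A sketch mirroring the curl example (`build_knowledge_agent_payload` is a hypothetical helper; send the result to `POST /v2/agents` with your API key in the Authorization header):

```python
def build_knowledge_agent_payload(key: str, knowledge_id: str) -> dict:
    """Assemble the create-agent body with knowledge base tools attached."""
    return {
        "key": key,
        "instructions": (
            "Help the user. First use retrieve_knowledge_bases to see what "
            "knowledge sources are available, then query_knowledge_base to "
            "find relevant information."
        ),
        "path": "Default/agents",
        "model": {"id": "openai/gpt-4o"},
        "settings": {
            "max_iterations": 5,
            "max_execution_time": 600,
            "tools": [
                {"type": "retrieve_knowledge_bases"},
                {"type": "query_knowledge_base"},
            ],
        },
        "knowledge_bases": [{"knowledge_id": knowledge_id}],
    }

payload = build_knowledge_agent_payload("knowledge-agent", "my_knowledge_base")
# POST the payload to https://api.orq.ai/v2/agents, e.g. with requests.post(...)
# and an "Authorization: Bearer $ORQ_API_KEY" header.
```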
Memory Stores are created and managed through the API. To learn more, see Using Memory Stores.
To use a Memory Store correctly, a Memory Entity ID must be sent during agent execution. This entity ID scopes memories to a specific user or session.
For more on using Memory Stores with Agents, see the Memory Stores documentation.
Add the memory_stores array to the agent configuration. Include the memory tools so the agent can discover, query, write, and delete memories.
```shell
curl -X POST https://api.orq.ai/v2/agents \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "memory-agent",
    "instructions": "You have access to user memories. Use retrieve_memory_stores to find what stores are available, then query_memory_store to search for relevant information before responding.",
    "path": "Default/agents",
    "model": "openai/gpt-4o",
    "settings": {
      "max_iterations": 5,
      "max_execution_time": 300,
      "tools": [
        { "type": "retrieve_memory_stores" },
        { "type": "query_memory_store" },
        { "type": "write_memory_store" },
        { "type": "delete_memory_document" }
      ]
    },
    "memory_stores": ["customer_information"]
  }'
```
Memory stores do not automatically save all information from conversations. Explicitly instruct the agent what to save. Without clear save instructions, the agent may miss important details.
Pass a memory.entity_id at execution time to scope memories to a specific user or session. See Run Agents for execution details.
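For illustration, scoping an invocation to a user might be assembled like this. Only the `memory.entity_id` field comes from this page; the `message` field is a simplified placeholder, so consult Run Agents for the real invocation schema:

```python
def build_invocation_body(message: str, entity_id: str) -> dict:
    """Attach a memory entity ID so memories are scoped to one user/session.

    The memory.entity_id field is documented here; the "message" field
    name is a hypothetical stand-in for the actual execution payload.
    """
    return {
        "message": message,
        "memory": {"entity_id": entity_id},
    }

body = build_invocation_body("What did I order last week?", "user-42")
```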
Attach a memory store to an agent:
Add the "customer_information" memory store to the "support-bot" agent and include all four memory tools
The assistant uses update_agent with the memory_stores array and the four memory tools in settings.tools.
Evaluators measure agent performance against defined criteria. Guardrails can block execution when an evaluation fails.
Only pre-configured Evaluators can be attached to agents. To see available standard evaluators or create custom ones, see Evaluators.
Click Add Evaluator or Add Guardrail in the Configuration panel.
Select the evaluator type.
Configure evaluation parameters:
Input or Output: whether to evaluate the agent’s input or its output.
Sample Rate (Evaluators only): the fraction of executions that trigger evaluation.
Evaluators run automatically during task execution and provide performance metrics.
Output Guardrails and Streaming: When an agent is invoked with streaming enabled, output guardrails are deactivated because they cannot run on partial chunks.
Attach evaluators and guardrails to an agent using the evaluators and guardrails fields in the create or update payload. For the full schema of evaluator and guardrail configuration, see the Create Agent API reference and Evaluators.
Use PATCH /v2/agents/{key} to add or update evaluators and guardrails on an existing agent without recreating it.
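A PATCH body attaching both might be assembled as below. The `evaluators` and `guardrails` field names come from this page, but the per-entry shape shown is an assumption; consult the Create Agent API reference for the actual schema:

```python
import json

patch_body = {
    # Attach a pre-configured evaluator to score agent output.
    # Entry shape (key/target/sample_rate) is illustrative only.
    "evaluators": [
        {"key": "response-quality", "target": "output", "sample_rate": 0.5}
    ],
    # Guardrails can block execution when an evaluation fails.
    "guardrails": [
        {"key": "toxicity", "target": "output"}
    ],
}
print(json.dumps(patch_body, indent=2))
```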
Add an evaluator to an agent:
Add the "response-quality" LLM evaluator to the "support-bot" agent, configured for output evaluation
The assistant uses update_agent with the evaluators field.
Add a guardrail:
Add a guardrail to "support-bot" that blocks responses when the toxicity evaluator score is above 0.8
The assistant uses update_agent with the guardrails field.
Control resource usage and execution limits from the Configuration panel.
| Constraint | Description |
| --- | --- |
| Max Iterations | Maximum number of LLM reasoning iterations per task |
| Max Execution Time | Maximum time the agent runs (in seconds) |
Max Iterations and Max Execution Time interact: an agent that requires many reasoning steps can hit both limits simultaneously. Note that max_execution_time counts only LLM thinking time; time spent in tool calls and sub-agent calls is excluded. Start with conservative limits and increase as needed.
Agents are run and scaled by Orq.ai. No infrastructure setup required.
The Versions tab shows the full history of all published agent configurations. Open it by selecting Versions from the tabs in the Agent Studio page.
Each version entry shows the version number, author, timestamp, optional commit message, and any assigned environment badges (e.g. latest, production).
By default, invoking an agent routes to the version tagged latest. To target a specific version, append @version-number to the agent key. Route by environment with @environment-name.
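The routing convention can be captured in a small helper. This is hypothetical code; only the `@` suffix behavior is from this page:

```python
from typing import Optional

def agent_ref(key: str,
              version: Optional[int] = None,
              environment: Optional[str] = None) -> str:
    """Build the agent reference used at invocation time.

    The bare key routes to the version tagged `latest`;
    `key@version-number` targets a specific version and
    `key@environment-name` routes by environment.
    """
    if version is not None and environment is not None:
        raise ValueError("Specify either a version or an environment, not both")
    if version is not None:
        return f"{key}@{version}"
    if environment is not None:
        return f"{key}@{environment}"
    return key

print(agent_ref("support-bot", version=3))  # → support-bot@3
```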
Click on any version to open the Compare Changes view. Use the From and To dropdowns to compare any two versions, including Current (unpublished working changes).
Two view modes are available:
Instructions: diff of agent instructions only.
Snapshot: full JSON diff of the complete agent configuration.
Use the toggle button to switch between side-by-side and unified diff layouts.