We’re introducing Agent Studio, a comprehensive workspace for building, configuring, and monitoring AI agents with full control and visibility into every aspect of their behavior.
What’s new:

- Visual agent configuration to set up your agents through an intuitive UI where you can define system instructions, select models, and configure runtime constraints including max iterations and max execution time.
- Tool support to add tools directly to your agent’s configuration, from built-in tools like Web Search, Web Scraper, and Current Date to custom tools specific to your use case.
- Agent system instruction generator creates system instructions on the fly based on your description of the agent.
- Integrated code snippets provide ready-to-use code in Python and Node.js for integrating your configured agents directly into your applications, as shown in the sketch below.
- Traces provide a complete view of every agent task, including timestamps, status indicators, execution triggers, and the full agent trace. Multi-turn interactions are tracked and managed using a unique task ID, giving you complete visibility and control over agent activity.
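To give a feel for the integration, here is a minimal Python sketch of invoking a configured agent over HTTP. The endpoint URL, payload fields, and agent key are illustrative assumptions rather than the documented API; use the ready-made snippets generated in Agent Studio for your actual agent.

```python
# Hypothetical sketch of invoking an Agent Studio agent from Python.
# The endpoint and payload shape are assumptions; copy the generated
# snippet from Agent Studio for production use.
import os
import requests

API_KEY = os.environ["ORQ_API_KEY"]   # your workspace API key
AGENT_KEY = "customer-support-agent"  # hypothetical agent identifier

response = requests.post(
    "https://api.orq.ai/v2/agents/invoke",  # assumption, not a documented URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "agent": AGENT_KEY,
        "input": "Summarize the latest ticket from user 4812.",
        # Runtime constraints configured in the UI can presumably
        # also be set per call:
        "max_iterations": 5,
        "max_execution_time": 60,  # seconds
    },
    timeout=120,
)
response.raise_for_status()
task = response.json()
# The task ID is what ties multi-turn interactions together in Traces.
print(task.get("task_id"), task.get("output"))
```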
Explore our new Agent Studio features in detail here.
Extend your agents’ capabilities with our comprehensive tool library. Tools enable your agents to interact with external systems, access real-time data, and perform specialized tasks.
What’s available:

- Built-in tools including:
  - Web Search - Search the web for real-time information
  - Web Scrape - Extract and retrieve content from specific web pages
  - Current Date - Access current date and time information
- Memory Stores
  - Retrieve memory stores - Access stored memory data
  - Query memory store - Search specific memory stores
  - Write memory store - Store information in memory
  - Delete memory document - Remove specific memory documents
- Knowledge Bases
  - Retrieve knowledge bases - Query your knowledge base collections
  - Query knowledge base - Search specific knowledge bases
- Multi-agent
  - Retrieve agents - Access and reference other agents in your workspace
  - Call sub agent - Execute other agents as tools
- Function tools to define JavaScript/TypeScript functions that your agents can execute.
- JSON Schema tools to specify tool behavior and parameters using standard JSON Schema format (see the sketch after this list).
- HTTP tools to connect your agents to external APIs and webhooks for seamless integrations.
- Python tools to define Python-based capabilities and leverage your existing Python code.
- MCP (Model Context Protocol) tools to connect to any remote MCP server for advanced context management and external integrations.
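To make the JSON Schema option concrete, the sketch below shows a tool definition following the common name/description/parameters convention, written as a Python dict so it can be serialized with json.dumps. The tool name and fields are generic placeholders and may differ from the exact format Orq expects.

```python
# Generic sketch of a JSON Schema tool definition, expressed as a Python dict.
# Follows the common name/description/parameters convention; Orq's exact
# expected format may differ.
get_order_status_tool = {
    "name": "get_order_status",  # hypothetical tool name
    "description": "Look up the fulfillment status of a customer order.",
    "parameters": {              # standard JSON Schema object
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The unique identifier of the order.",
            },
            "include_history": {
                "type": "boolean",
                "description": "Whether to return the full status history.",
                "default": False,
            },
        },
        "required": ["order_id"],
    },
}
```

The model decides when to call the tool based on the description, then supplies arguments matching the schema, so precise descriptions and tight `required` constraints tend to matter more than clever naming.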
Learn more about creating and using tools via the UI and SDK at Tools Documentation.
Capture human feedback at scale with our new Annotation Queue. Annotated data powers model fine-tuning, evaluation benchmarks, and quality assurance, and the review process is now streamlined in a focused interface built for efficiency.
What’s new:

- Centralized annotation interface to review and label LLM outputs in one place.
- Queue management to organize and prioritize items that need review.
- Custom labels you define in Human Feedback to match your evaluation needs.
- Keyboard shortcuts to speed up the annotation process and maintain consistent labeling.
- Progress tracking to monitor annotation completion across your team.
- Dataset creation to use annotated data for evaluations or analysis.
To set up annotation workflows and queues, see Annotations for configuration steps and rubric creation.
Trace OpenAI Agents and Assistants API interactions with Orq.ai using OpenTelemetry. Get deep insights into agent performance, token usage, tool utilization, and conversation flows to optimize your AI applications.
Key Features:

- Native OpenTelemetry instrumentation for OpenAI Agents SDK
- Automatic tracing of agent creation, execution, and tool calls
- Custom span support for advanced workflow tracking (see the sketch after this list)
- Full visibility into multi-agent conversations and handoffs
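Here is a minimal, hedged sketch of wiring up OpenTelemetry in Python and emitting a custom span around an agent run. The OTLP endpoint, headers, and the run_agent placeholder are assumptions for illustration; see the OpenAI Agents Integration guide for the actual configuration.

```python
# Minimal OpenTelemetry setup with a custom span around an agent run.
# Endpoint, headers, and run_agent() are illustrative assumptions;
# follow the OpenAI Agents Integration guide for the real configuration.
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://api.orq.ai/v2/otel/v1/traces",  # assumption, check the docs
            headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-agent-app")

def run_agent(prompt: str) -> str:
    """Placeholder for your instrumented OpenAI Agents SDK call."""
    return f"(agent answer for: {prompt})"

# Custom span for advanced workflow tracking; attributes show up on the trace.
with tracer.start_as_current_span("triage-workflow") as span:
    span.set_attribute("workflow.step", "triage")
    answer = run_agent("Classify this support ticket.")
    span.set_attribute("agent.answer.length", len(answer))
```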
Want to trace OpenAI Agents? See OpenAI Agents Integration for setup steps, instrumentation options, and custom span examples.
Whether you’re finding the best prompts, models, or configurations for your use case, you can now do it with increased flexibility. Extend existing experiments without rerunning everything by adding new prompts or evaluators independently, and annotate results directly to identify the winning configuration.
New Capabilities:

- Independent column runs to evaluate new prompts or evaluators without rerunning entire experiments.
- Expandable experiments that let you add comparisons incrementally to existing runs.
- In-experiment annotation to enable human assessment of the best configuration directly within the comparison view.
- Flexible feedback options including human review, categorical scoring, numeric ratings, and free text.
- Enhanced comparison view with full synchronization between experiment runs and annotations.
- Time to first token tracking to measure response latency and optimize prompt/model performance (see the sketch below).
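For context on what this metric captures, here is a hedged sketch of how time to first token is typically measured against a streaming endpoint. The client and model name are placeholders for illustration, not how Experiments computes the metric internally.

```python
# Hedged sketch: measuring time to first token (TTFT) for a streaming
# chat completion. Illustrates the metric, not Experiments' internals.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Write a haiku about latency."}],
    stream=True,
)

ttft = None
for chunk in stream:
    # The first chunk carrying actual content marks the TTFT point.
    if chunk.choices and chunk.choices[0].delta.content:
        ttft = time.perf_counter() - start
        break

print(f"time to first token: {ttft:.3f}s" if ttft else "no tokens received")
```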
Discover how to use Experiments to improve your setup and find the best configuration.
Better organize your AI development with improved project workflows. Navigate resources more efficiently with a streamlined folder structure that makes it easier to access and manage all resources in context.

Learn more about organizing resources and navigating project structures at Projects Documentation.

Want deeper insights into your spending? See Billing for cost breakdowns and usage analytics.
Discover and deploy models faster with improved filtering and navigation. Find the right model for your use case with location-based filtering by region and advanced search options that streamline model discovery.
Explore all available models, filter by region, and compare capabilities at Model Garden.
We’ve added support for ByteDance as a new provider, bringing their diffusion-based image models known for ultra-sharp high-resolution generation, consistent compositions, and powerful instruction-based editing, all built on a unified architecture.
New Capabilities:

- High-resolution image creation with improved detail, clarity, and visual consistency.
- Advanced image editing through a shared multi-modal architecture.
- Support for new models including:
  - SeedEdit-3.0-I2I-250628
  - Seeddream-3.0-T2I-250415
  - Seeddream-4-0-250828
- Full compatibility across Deployments, Experiments, and the AI Gateway.
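As a purely illustrative sketch, calling one of these models through an image-generation route on the AI Gateway might look like the following. The gateway URL, route, model key format, and payload fields are all assumptions; check the Model Garden and Gateway documentation for the actual interface.

```python
# Illustrative only: invoking a Seeddream model through a hypothetical
# image-generation route on the AI Gateway.
import os
import requests

resp = requests.post(
    "https://api.orq.ai/v2/proxy/images/generations",  # assumption, not a documented URL
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},
    json={
        "model": "bytedance/Seeddream-4-0-250828",  # model key format is an assumption
        "prompt": "A photorealistic mountain lake at dawn, ultra-sharp detail",
        "size": "2048x2048",
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json())  # typically contains image URLs or base64 payloads
```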
Want to try out the new ByteDance models, SeedEdit and Seeddream, for high-resolution generation and advanced editing? Explore them across Deployments, Experiments, and the AI Gateway.
Connect your vector databases for enhanced RAG capabilities. Alongside our internal knowledge base solution, you can now seamlessly integrate external knowledge bases directly into your deployments in Orq. Connect with knowledge base vendors such as Pinecone or Weaviate for flexible knowledge retrieval within your agent workflows.
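To illustrate the retrieval side, here is a hedged sketch of querying an external Pinecone index of the kind you can now plug into a deployment. The Orq-side wiring is configured in the UI; the code below only shows a standard Pinecone query, with the index name, embedding model, and metadata fields as placeholders.

```python
# Hedged sketch: querying an external Pinecone index of the kind you can
# now connect to Orq deployments. Index and field names are placeholders.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("product-docs")  # hypothetical index name

# Embed the user question with the same model used at indexing time.
question = "How do I rotate my API keys?"
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=question,
).data[0].embedding

# Retrieve the closest chunks; their text would ground the agent's answer.
results = index.query(vector=embedding, top_k=3, include_metadata=True)
for match in results.matches:
    print(match.score, (match.metadata or {}).get("text", "")[:80])
```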
Learn more about integrating internal and external knowledge bases at Knowledge Bases Documentation.