Agent Studio
v4.0.0
We’re introducing Agent Studio, a comprehensive workspace for building, configuring, and monitoring AI agents with full control and visibility into every aspect of their behavior.
What’s new:
  • Visual agent configuration to set up your agents through an intuitive UI where you can define system instructions, select models, and configure runtime constraints including max iterations and max execution time.
  • Tool support lets you add built-in tools like Web Search, Web Scraper, and Current Date directly to your agent’s configuration, or create custom tools specific to your use case.
  • Agent system instruction generator creates system instructions on the fly based on your description of the agent.
  • Integrated code snippets provide ready-to-use code in Python and Node.js for integrating your configured agents directly into your applications.
  • Traces provide a complete view of every agent task, including timestamps, status indicators, execution triggers, and the full agent trace. Multi-turn interactions are tracked and managed using a unique task ID, giving you complete visibility and control over agent activity.
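The runtime constraints above (max iterations and max execution time) can be pictured as a bounded agent loop. The sketch below is illustrative only: the field names are hypothetical and not Agent Studio’s actual configuration schema.

```python
import time

# Hypothetical config fields for illustration; not the Agent Studio schema.
agent_config = {
    "name": "support-agent",
    "model": "gpt-4o",
    "instructions": "You are a helpful support assistant.",
    "max_iterations": 5,
    "max_execution_time": 30,  # seconds
}

def run_agent(config, step):
    """Run `step` until it signals completion or a runtime constraint is hit."""
    deadline = time.monotonic() + config["max_execution_time"]
    for i in range(config["max_iterations"]):
        if time.monotonic() > deadline:
            return "timeout", i
        if step(i):  # step returns True when the agent decides it is done
            return "finished", i + 1
    return "max_iterations", config["max_iterations"]

# A stand-in step that finishes on its third iteration.
status, steps = run_agent(agent_config, lambda i: i == 2)
```

Either constraint ends the run: the loop cap bounds tool-call churn, while the deadline bounds wall-clock time.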
Explore our new Agent Studio features in detail here.
Tool Library
v4.0.0
Extend your agents’ capabilities with our comprehensive tool library. Tools enable your agents to interact with external systems, access real-time data, and perform specialized tasks.
What’s available:
  • Built-in tools including:
    • Web Search - Search the web for real-time information
    • Web Scrape - Extract and retrieve content from specific web pages
    • Current Date - Access current date and time information
    • Memory Stores
      • Retrieve memory stores - Access stored memory data
      • Query memory store - Search specific memory stores
      • Write memory store - Store information in memory
      • Delete memory document - Remove specific memory documents
    • Knowledge Bases
      • Retrieve knowledge bases - Query your knowledge base collections
      • Query knowledge base - Search specific knowledge bases
    • Multi-agent
      • Retrieve agents - Access and reference other agents in your workspace
      • Call sub agent - Execute other agents as tools
  • Function tools to define JavaScript/TypeScript functions that your agents can execute.
  • JSON Schema tools to specify tool behavior and parameters using standard JSON Schema format.
  • HTTP tools to connect your agents to external APIs and webhooks for seamless integrations.
  • Python tools to define Python-based capabilities and leverage your existing Python code.
  • MCP (Model Context Protocol) tools to connect to any remote MCP server for advanced context management and external integrations.
Tools can be added directly to your agent’s configuration through the “Add tool” interface, providing centralized management of all agent capabilities.
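For the JSON Schema tool type mentioned above, a definition might look like the following. This is an illustrative sketch only: the exact envelope Orq expects may differ, and the tool name and parameters here are hypothetical.

```python
# Hypothetical JSON Schema tool definition, in standard JSON Schema format.
get_order_status = {
    "name": "get_order_status",
    "description": "Look up the fulfillment status of an order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Internal order ID"},
            "include_items": {"type": "boolean", "default": False},
        },
        "required": ["order_id"],
    },
}

# Minimal check that a model-produced argument payload satisfies `required`.
def missing_required(schema, args):
    return [k for k in schema["parameters"]["required"] if k not in args]

missing = missing_required(get_order_status, {"include_items": True})
```

The schema tells the model which arguments exist and which are mandatory; validating arguments before execution catches malformed tool calls early.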
Learn more about creating and using tools via the UI and SDK at Tools Documentation.
Annotation Queue
v4.0.0
Capture human feedback at scale with our new Annotation Queue. Annotated data powers model fine-tuning, evaluation benchmarks, and quality assurance, now streamlined in a focused interface built for efficient review.
What’s new:
  • Centralized annotation interface to review and label LLM outputs in one place.
  • Queue management to organize and prioritize items that need review.
  • Custom labels you define in Human Feedback to match your evaluation needs.
  • Keyboard shortcuts to speed up the annotation process and maintain consistent labeling.
  • Progress tracking to monitor annotation completion across your team.
  • Create datasets to use annotated data for evaluations or analysis.
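Conceptually, turning annotated items into a dataset is a filter-and-map over the queue. The sketch below is illustrative; the record fields and labels are hypothetical, not the Annotation Queue’s actual data model.

```python
# Hypothetical annotation records; field names and labels are illustrative.
queue = [
    {"input": "Summarize the refund policy", "output": "Refunds within 30 days.",
     "label": "accurate", "score": 5},
    {"input": "Translate 'hello' to French", "output": "Bonjour",
     "label": "accurate", "score": 5},
    {"input": "What is 2+2?", "output": "5", "label": "incorrect", "score": 1},
]

# Keep only approved items as evaluation dataset rows.
dataset = [
    {"input": r["input"], "expected_output": r["output"]}
    for r in queue
    if r["label"] == "accurate"
]
```

Rows that pass human review become ground-truth examples for evaluators, while rejected outputs can feed a separate error-analysis set.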
To set up annotation workflows and queues, see Annotations for configuration steps and rubric creation.
OpenAI Agents Integration
v4.0.0
Trace OpenAI Agents and Assistants API interactions with Orq.ai using OpenTelemetry. Get deep insights into agent performance, token usage, tool utilization, and conversation flows to optimize your AI applications.
Key Features
  • Native OpenTelemetry instrumentation for OpenAI Agents SDK
  • Automatic tracing of agent creation, execution, and tool calls
  • Custom span support for advanced workflow tracking
  • Full visibility into multi-agent conversations and handoffs
Quick Start
Environment Setup
  # Set up OTel endpoint and authentication
  export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
  export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
  export OTEL_RESOURCE_ATTRIBUTES="service.name=openai-agents-app,service.version=1.0.0"
  export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/json"
  export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
Python
  # pip install opentelemetry-sdk opentelemetry-exporter-otlp openai-agents orq-ai-sdk
  from opentelemetry.sdk.trace import TracerProvider
  from opentelemetry.sdk.trace.export import BatchSpanProcessor
  from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
  from orq_ai_sdk.openai_agents_instrumentation import OpenAIAgentsInstrumentor
  from agents import Agent, Runner
  
  # Set up OpenTelemetry
  tracer_provider = TracerProvider()
  tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
  
  # Instrument OpenAI agents with Orq.ai
  OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer_provider)
  
  # Create and run your agent
  agent = Agent(name="Assistant", instructions="You are a helpful assistant")
  result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
  print(result.final_output)
With Function Calling
  from agents import Agent, Runner, function_tool
  
  @function_tool
  def get_weather(location: str) -> str:
      """Get weather for a location"""
      return f"The weather in {location} is sunny, 72°F"
  
  agent = Agent(
      name="Weather Assistant",
      instructions="You are a weather assistant. Use the get_weather function.",
      tools=[get_weather]
  )
  
  result = Runner.run_sync(agent, "What's the weather like in Boston?")
  print(result.final_output)
Want to trace OpenAI Agents? See OpenAI Agents Integration for setup steps, instrumentation options, and custom span examples.
Flexible Experiments
v4.0.0
Whether you’re finding the best prompts, models, or configurations for your use case, you can now do it with increased flexibility. Extend existing experiments without rerunning everything by adding new prompts or evaluators independently, and annotate results directly to identify the winning configuration.
New Capabilities:
  • Independent column runs to evaluate new prompts or evaluators without rerunning entire experiments.
  • Expandable experiments that let you add comparisons incrementally to existing runs.
  • In-experiment annotation to enable human assessment of the best configuration directly within the comparison view.
  • Flexible feedback options including human review, categorical scoring, numeric ratings, and free text.
  • Enhanced comparison view with full synchronization between experiment runs and annotations.
  • Time to first token tracking to measure response latency and optimize prompt/model performance.
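Time to first token is simply the delay between sending a request and receiving the first streamed token. As a minimal sketch (the streaming source here is a stand-in, not Orq’s API):

```python
import time

def time_to_first_token(stream):
    """Return (ttft_seconds, full_text) for an iterable of streamed tokens."""
    start = time.monotonic()
    ttft = None
    parts = []
    for token in stream:
        if ttft is None:
            ttft = time.monotonic() - start  # latency until the first token
        parts.append(token)
    return ttft, "".join(parts)

# Stand-in for a streaming model response.
def fake_stream():
    time.sleep(0.05)  # simulated latency before the first token arrives
    yield "Hello"
    yield ", world"

ttft, text = time_to_first_token(fake_stream())
```

Measured this way, TTFT isolates perceived responsiveness from total generation time, which is what makes it useful for comparing prompt/model configurations.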
Discover how to use Experiments to improve your setup and find the best configuration.
Enhanced Projects
v4.0.0
Better organize your AI development with improved project workflows. Navigate resources more efficiently with a streamlined folder structure that makes it easier to access and manage all resources in context.
Learn more about organizing resources and navigating project structures at Projects Documentation.
Billing Dashboard
v4.0.0
Get clearer financial insights with improved data visualization. Better chart alignment makes it easier to analyze costs across time periods and understand your spending patterns at a glance.
Want deeper insights into your spending? See Billing for cost breakdowns and usage analytics.
Model Garden
v4.0.0
Discover and deploy models faster with improved filtering and navigation. Find the right model for your use case with location-based filtering by region and advanced search options that streamline model discovery.
Explore all available models, filter by region, and compare capabilities at Model Garden.
Bytedance Image Models
v4.0.0
We’ve added support for Bytedance as a new provider, bringing their diffusion-based image models known for ultra-sharp high-resolution generation, consistent compositions, and powerful instruction-based editing, all built on a unified architecture.
New Capabilities:
  • High-resolution image creation with improved detail, clarity, and visual consistency.
  • Advanced image editing through a shared multi-modal architecture.
  • Support for new models including:
    • SeedEdit-3.0-I2I-250628
    • Seeddream-3.0-T2I-250415
    • Seeddream-4-0-250828
  • Full compatibility across Deployments, Experiments, and the AI Gateway.
Want to try out the new Bytedance models, SeedEdit and Seeddream, for high-resolution generation and advanced editing? Explore them in Deployments, Experiments, or the AI Gateway.
External Knowledge Bases
v4.0.0
Connect your vector databases for enhanced RAG capabilities. Next to our internal knowledge base solution, you can now seamlessly integrate external knowledge bases directly into your deployments in Orq. Connect with any knowledge base vendor like Pinecone or Weaviate for flexible knowledge retrieval within your agent workflows.
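The retrieval flow is the same regardless of vendor: query the external store, then inject the returned documents into the prompt. The sketch below is vendor-agnostic and uses a toy in-memory retriever as a stand-in; real Pinecone or Weaviate clients have their own APIs.

```python
# Toy retriever standing in for an external vector store (e.g. Pinecone,
# Weaviate). Real stores rank by embedding similarity; here we rank by
# shared lowercase words purely for illustration.
def retrieve(query, top_k=2):
    corpus = [
        "Orders ship within 2 business days.",
        "Returns are accepted within 30 days.",
        "Support is available 24/7 via chat.",
    ]
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(set(doc.lower().split()) & q_words),
        reverse=True,
    )[:top_k]

def build_prompt(query):
    # Inject retrieved documents as grounding context for the model.
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When are returns accepted?")
```

Swapping the toy retriever for a real vector-store client changes only the `retrieve` step; the prompt-assembly side of the workflow stays the same.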
Learn more about integrating internal and external knowledge bases at Knowledge Bases Documentation.