# Orq.ai Documentation

## Docs

- [CLAUDE](https://docs.orq.ai/CLAUDE.md)
- [Past Updates](https://docs.orq.ai/changelog/past_product_updates.md): Archive of past Orq.ai product updates and announcements, including Evaluatorq, platform improvements, and earlier feature releases.
- [Release 3.12](https://docs.orq.ai/changelog/release-312.md): Release 3.12 introduces human-in-the-loop reviews, response caching for cost optimization, faster root cause analysis, and OpenTelemetry support.
- [Release 3.13](https://docs.orq.ai/changelog/release-313.md): Release 3.13 adds Claude Haiku 4.5 to the AI Router and introduces a verified n8n integration for workflow automation with 1,000+ apps.
- [Release 3.14](https://docs.orq.ai/changelog/release-314.md): Release 3.14 adds Google Gemini 3 Pro Preview with 1M+ context window and Anthropic structured outputs with JSON schema validation.
- [Release 4.0](https://docs.orq.ai/changelog/release-4.md): Release 4.0 introduces Gemini 3 Flash Preview with 1M token context window and GPT Image 1.5 support for AI-powered image generation workflows.
- [Release 4.1](https://docs.orq.ai/changelog/release-4-1.md): Release 4.1 introduces the Evaluatorq Python SDK for running experiments from code and a Command Bar for universal search and navigation.
- [Release 4.2](https://docs.orq.ai/changelog/release-4-2.md): Release 4.2 launches the sovereign AI Router with EU data residency, single-key access to 300+ models, VPC deployment, and enterprise audit logs.
- [Release 4.3](https://docs.orq.ai/changelog/release-4-3.md): Release 4.3 adds AI Router credits with auto top-up and unified billing across 300+ models, plus agent version control and environment support.
- [Release 4.4](https://docs.orq.ai/changelog/release-4-4.md): Release 4.4 introduces the Orq MCP Server with 23 tools for managing agents, datasets, experiments, and traces from any MCP-compatible client.
- [Release 4.5](https://docs.orq.ai/changelog/release-4-5.md): Release 4.5 adds Jinja2 and Mustache template engines for advanced prompt templating, plus version control for evaluators and tools.
- [Release 4.6](https://docs.orq.ai/changelog/release-4-6.md): Release 4.6 launches AI Chat for a unified model interface with agent exposure and workspace configuration, plus webhook integrations.
- [Release 4.7](https://docs.orq.ai/changelog/release-4-7.md): Release 4.7 introduces Orq Skills, reusable coding agent workflows for observability setup, agent building, experiments, and trace analysis.
- [Release 4.8](https://docs.orq.ai/changelog/release-4-8.md): Release 4.8 introduces Router Policies, Guardrail Rules, and Routing Rules for request-level control, plus categorical evaluators, a redesigned homescreen, and predefined MCP servers.
- [AI Router V3 & Agents V3](https://docs.orq.ai/changelog/router_agents_v3.md): Migration guide for AI Router V3 and Agents V3 endpoints. Rewritten in Go for lower latency, reduced memory, and full OpenAI SDK compatibility.
- [Annotation Queues](https://docs.orq.ai/docs/administer/annotation-queue.md): Organize traces into annotation queues for structured human review. Assign reviewers, set priorities, and track progress on feedback workflows.
- [Admin Keys | Organization](https://docs.orq.ai/docs/administer/api-keys.md): Manage Admin Keys for secure Orq.ai authentication. Create, rotate, and revoke organization-wide keys for production, staging, and development environments.
- [Billing and usage tracking](https://docs.orq.ai/docs/administer/billing-usage.md): Monitor LLM usage, costs, and billing cycles across workspaces. Track token consumption, model generation metrics, and spending analytics for AI applications.
- [Budgets and spending controls](https://docs.orq.ai/docs/administer/budgets.md): Control AI spending across your organization with per-identity budgets, API key rate limits, and workspace credits to manage costs at every level.
- [Context attributes for custom metadata](https://docs.orq.ai/docs/administer/context-attributes.md): Define custom context attributes for LLM tracking and analytics. Attach metadata, tags, and business context to AI requests for advanced filtering and insights.
- [Data compliance and privacy](https://docs.orq.ai/docs/administer/data-compliance.md): Understand Orq.ai data handling, privacy practices, and compliance measures. GDPR, SOC 2, and enterprise security standards for AI application data.
- [Environments for dev, staging, and production](https://docs.orq.ai/docs/administer/environments.md): Manage multiple environments in Orq.ai workspaces. Separate development, staging, and production configurations for safe AI application deployment workflows.
- [Organization settings](https://docs.orq.ai/docs/administer/overview.md): Configure Orq.ai workspace settings. Manage teams, API keys, billing, environments, webhooks, and human review workflows from the organization panel.
- [Members and teams](https://docs.orq.ai/docs/administer/permissions/overview.md): Manage team members, roles, and project access in Orq.ai. Configure Admin, Developer, and Researcher roles and organize members into teams.
- [RBAC & Permissions](https://docs.orq.ai/docs/administer/permissions/rbac-permissions.md): Configure role-based access control with Admin, Developer, and Researcher roles. Set granular permissions for teams and project resources.
- [Workspace Settings](https://docs.orq.ai/docs/administer/workspace-settings.md): Configure your Orq.ai workspace name, default project, and organization-level preferences from the workspace settings panel in the dashboard.
- [Build agents with the API](https://docs.orq.ai/docs/agents/agent-api.md): Create and execute AI agents via API. Build autonomous agents with tools, memory, and knowledge bases using Python, TypeScript, or cURL.
- [Agent Schedules](https://docs.orq.ai/docs/agents/agent-schedules.md): Run Orq.ai agents on a recurring or one-off cadence using cron, interval, or @at expressions — without holding open an HTTP connection.
- [Agent Studio visual builder](https://docs.orq.ai/docs/agents/agent-studio.md): Build AI agents visually without code using Orq.ai Agent Studio. Configure models, tools, knowledge bases, memory, and guardrails through an intuitive UI.
- [Build Agents](https://docs.orq.ai/docs/agents/build.md): Create and configure AI agents in Orq.ai. Set instructions, select models, attach tools, knowledge bases, memory stores, and guardrails through the AI Studio, API, or Orq MCP.
- [Function Tool Continuation](https://docs.orq.ai/docs/agents/function-tool-continuation.md): Let the model call your own functions — query a database, check inventory, book a flight — and feed the result back to get a natural language answer.
- [MCP Servers](https://docs.orq.ai/docs/agents/mcp-servers.md): Attach Model Context Protocol (MCP) servers to Responses API calls — pre-saved by key or inline — and filter exposed tools per request.
- [AI agents](https://docs.orq.ai/docs/agents/overview.md): Create AI agents with persistent memory, knowledge integration, and streaming. Build single and multi-agent systems with A2A Protocol.
- [Responses API](https://docs.orq.ai/docs/agents/responses-api.md): Create model responses with the OpenResponses-compatible endpoint — tool calling, multi-turn conversations, streaming, and agent invocation.
- [Run Agents](https://docs.orq.ai/docs/agents/run.md): Execute AI agents in Orq.ai. Send messages, pass variables, attach files, continue tasks, and trace executions through the AI Studio, API, or Orq MCP.
- [Configure AI Chat](https://docs.orq.ai/docs/ai-chat/configuration.md): Control which LLM models and agents are available to end users in AI Chat. Admin-only settings for managing the AI Chat experience.
- [AI Chat](https://docs.orq.ai/docs/ai-chat/overview.md): Chat and test AI models in real time directly in Orq.ai, with full conversation history, file attachments, and access to your configured agents.
- [Custom analytics reports](https://docs.orq.ai/docs/analytics/custom-reports.md): Create custom analytics dashboards with real-time model usage, cost breakdowns, latency distribution, and error rates. Filter by project and time.
- [Home | Monitor Workspace Activity](https://docs.orq.ai/docs/analytics/dashboards.md): Monitor workspace activity with real-time metrics. Track requests, costs, tokens, latency, and error rates across models and deployments.
- [Deployment analytics and variant tracking](https://docs.orq.ai/docs/analytics/deployment-analytics.md): View cost, latency (P95, P99), and error rates per deployment variant. Compare metrics across time windows to optimize AI performance.
- [Track usage by identity](https://docs.orq.ai/docs/analytics/identity.md): Group AI metrics by user, team, project, or client. Create identities via API to organize analytics and monitor usage patterns across your organization.
- [Identity metrics](https://docs.orq.ai/docs/analytics/identity-metrics.md): Attach identity IDs to API calls for granular usage tracking. Monitor metrics per user, team, or project with SDK or direct API integration.
- [Annotate AI responses in the AI Studio](https://docs.orq.ai/docs/annotations/ai-studio.md): Capture human feedback on AI responses in the AI Studio. Review traces, apply annotations, and build curated datasets for improvement.
- [Annotations API](https://docs.orq.ai/docs/annotations/api.md): Add human feedback and annotations to traces and spans programmatically. Capture quality assessments and corrections on AI responses.
- [Annotations for human feedback](https://docs.orq.ai/docs/annotations/overview.md): Add human feedback and annotations to LLM traces and spans. Capture quality assessments and review AI responses via the API or AI Studio.
- [Annotation queues for review workflows](https://docs.orq.ai/docs/annotations/queues.md): Organize and manage human review workflows with Annotation Queues. Efficiently review traces, apply annotations, and build curated datasets.
- [Collaboration](https://docs.orq.ai/docs/collaboration/overview.md): Comment, discuss, and track changes across deployments, agents, prompts, and other AI entities. Maintain a full audit trail for your team.
- [Advanced RAG with multi-source retrieval](https://docs.orq.ai/docs/common-architecture/advanced-rag.md): Build enterprise RAG systems with multi-source retrieval, agentic query enhancement, and quality validation using RAGAS evaluation.
- [Agents Framework & API Guide](https://docs.orq.ai/docs/common-architecture/agents-framework-guide.md): Step-by-step guide to building agents with the Orq.ai Agents Framework and API. Covers tools, memory, knowledge bases, and multi-agent patterns.
- [AI agent lead qualification pattern](https://docs.orq.ai/docs/common-architecture/ai-agent.md): Build multi-agent systems with Orq.ai. Create specialized agents for lead qualification, CRM integration, and automated workflows using the A2A Protocol.
- [Customer support chatbot pattern](https://docs.orq.ai/docs/common-architecture/chatbot.md): Build customer support chatbots with Orq.ai. Create conversational AI with memory, context awareness, and intelligent escalation to human agents.
- [AI gateway vs config management](https://docs.orq.ai/docs/common-architecture/gateway-vs-config.md): Compare AI Gateway and Configuration Management integration patterns. Choose the right Orq.ai architecture for your LLM application deployment strategy.
- [Common architecture patterns](https://docs.orq.ai/docs/common-architecture/overview.md): Explore proven architecture patterns for building AI applications with Orq.ai. From simple deployments to advanced RAG systems and multi-agent frameworks.
- [Simple deployment pattern](https://docs.orq.ai/docs/common-architecture/simple-deployment.md): Implement simple deployment architecture for LLM applications. Quick-start pattern for straightforward AI integration with minimal configuration overhead.
- [Simple RAG pattern](https://docs.orq.ai/docs/common-architecture/simple-rag.md): Build a simple RAG system with Orq.ai. Combine knowledge bases with LLMs for accurate, document-grounded responses. Step-by-step implementation guide.
- [Control Tower assets](https://docs.orq.ai/docs/control-tower/assets.md): View and manage all AI assets in your workspace from a single page. Browse agents, tools, deployments, and models with usage and cost summaries.
- [Getting started with Control Tower](https://docs.orq.ai/docs/control-tower/getting-started.md): Start monitoring your AI agents in Control Tower. Connect a supported framework, send traces, and watch your assets appear automatically.
- [Control Tower](https://docs.orq.ai/docs/control-tower/overview.md): Monitor all AI agents in real time with a live dashboard. Track costs, token usage, error rates, and performance across your workspace.
- [Conversations](https://docs.orq.ai/docs/conversations/overview.md): Group related messages into conversations to maintain context and history across chat sessions. Create, update, and query conversations via the API.
- [Manage datasets via the API](https://docs.orq.ai/docs/datasets/api-usage.md): Create, populate, and manage datasets via API. Add up to 5,000 datapoints per request. Integrate with Python, Node.js, or cURL for automated workflows.
- [Create a dataset](https://docs.orq.ai/docs/datasets/creating.md): Build datasets for LLM testing. Add inputs, messages, and expected outputs manually or import from CSV for experiments and evaluations.
- [Datasets for evaluation and testing](https://docs.orq.ai/docs/datasets/overview.md): Create datasets to test LLM models at scale. Define inputs, messages, and expected outputs for experiments. Validate model performance across iterations.
- [Create a deployment](https://docs.orq.ai/docs/deployments/creating.md): Create production deployments for LLM applications. Configure models, set routing rules, and ship AI to production with one-click deployment from AI Studio.
- [Integrate deployments via the API](https://docs.orq.ai/docs/deployments/integration.md): Integrate Orq.ai deployments into your applications. Get code snippets for Python, Node.js, and cURL. Connect with a single line of code.
- [Deployments](https://docs.orq.ai/docs/deployments/overview.md): Deploy LLM applications to production with Orq.ai. Configure model routing, versioning, and monitoring. Integrate with one line of code.
- [Enterprise API](https://docs.orq.ai/docs/enterprise-api/overview.md): Enterprise API for advanced workspace management, user provisioning, and administrative controls. Coming soon to the Orq.ai platform.
- [Enterprise SSO authentication](https://docs.orq.ai/docs/enterprise/auth.md): Configure enterprise Single Sign-On for Orq.ai using Okta or Microsoft Entra ID. Supports OIDC and SAML protocols for secure authentication.
- [Network and system requirements](https://docs.orq.ai/docs/enterprise/network-requirement.md): Enterprise network and system requirements for Orq.ai VPC deployment. Firewall rules, bandwidth needs, and infrastructure specifications.
- [Audit logs](https://docs.orq.ai/docs/enterprise/organization/audit-logs.md): Monitor organization activities and changes. Track who did what, when they did it, and what was affected for compliance and security.
- [Enterprise | Sovereign AI](https://docs.orq.ai/docs/enterprise/sovereign-ai.md): How Orq.ai covers every layer of AI sovereignty: business entity, investors, infrastructure, model routing, ZDR, and data privacy. All from within the European Union.
- [Trust center and compliance](https://docs.orq.ai/docs/enterprise/trust.md): Orq.ai Trust Center with security certifications, compliance documentation, and privacy policies. SOC 2, GDPR, and enterprise security standards.
- [VPC deployment on AWS or Azure](https://docs.orq.ai/docs/enterprise/vpc-deployment.md): Deploy Orq.ai within your Virtual Private Cloud on AWS or Azure for enhanced security, compliance, data residency, and network isolation.
- [Manage evaluators via the API](https://docs.orq.ai/docs/evaluators/api-usage.md): Create and manage evaluators programmatically with the Orq.ai API. Build evaluation pipelines and automate LLM assessment via SDKs.
- [Create an evaluator](https://docs.orq.ai/docs/evaluators/creating.md): Build custom evaluators to assess LLM outputs. Create LLM-as-judge, HTTP, Python, or function evaluators in AI Studio for automated quality checks.
- [Evaluator library](https://docs.orq.ai/docs/evaluators/library.md): Browse pre-built evaluators in the Orq.ai Hub. Access Function, RAGAS, and LLM evaluators ready to use in experiments, deployments, and agents.
- [Evaluators for automated assessment](https://docs.orq.ai/docs/evaluators/overview.md): Automate LLM output evaluation with custom evaluators. Use LLM-as-judge, Python, HTTP, or RAGAS metrics to validate model responses.
- [Run Experiments via the API](https://docs.orq.ai/docs/experiments/api.md): Run experiments directly from code. Compare Deployments and Agents side-by-side and view results in your terminal or in Orq's AI Studio.
- [Create an experiment](https://docs.orq.ai/docs/experiments/creating.md): Set up LLM experiments with datasets and prompts. Configure inputs, messages, and expected outputs to test model performance at scale.
- [Experiments for LLM testing](https://docs.orq.ai/docs/experiments/overview.md): Run experiments to test LLM prompts and models at scale. Compare performance metrics, evaluate outputs, and iterate on prompt configs.
- [Add feedback via the API](https://docs.orq.ai/docs/feedback/adding-programmatically.md): Submit feedback programmatically using the Orq.ai API and SDK. Track LLM quality metrics using Trace IDs for automated monitoring and analytics workflows.
- [Add feedback to generations](https://docs.orq.ai/docs/feedback/adding-to-generations.md): Capture human feedback on LLM generations. Implement human-in-the-loop workflows to collect ratings, corrections, and quality signals for AI outputs.
- [Feedback for LLM quality tracking](https://docs.orq.ai/docs/feedback/overview.md): Implement human-in-the-loop feedback for LLM applications. Review model outputs, make corrections, and build curated datasets for continuous improvement.
- [Hub for prompts and evaluators](https://docs.orq.ai/docs/hub/overview.md): Browse and import pre-built prompts and evaluators from Orq.ai's hub. Add templates to your projects and customize them for your use cases.
- [n8n | Workflow Automation with Orq.ai](https://docs.orq.ai/docs/integrations/automation/n8n.md): Use the Orq.ai community nodes in n8n to run agents, invoke deployments, and search knowledge bases inside your automation workflows.
- [Claude Code | MCP & AI Router](https://docs.orq.ai/docs/integrations/code-assistants/claude-code.md): Integrate Orq.ai with Claude Code CLI using MCP. Access your workspace, manage experiments, and analyze traces from your terminal.
- [Claude Desktop | MCP](https://docs.orq.ai/docs/integrations/code-assistants/claude-desktop.md): Connect your Orq.ai workspace to Claude Desktop. Access your workspace, run experiments, and analyze traces directly from the desktop app.
- [Codex MCP Integration](https://docs.orq.ai/docs/integrations/code-assistants/codex.md): Integrate Orq.ai with Codex using the Model Context Protocol. Access datasets, experiments, and analytics from your AI coding assistant.
- [Cursor MCP Integration](https://docs.orq.ai/docs/integrations/code-assistants/cursor.md): Integrate Orq.ai with Cursor IDE using the Model Context Protocol. Manage experiments, datasets, and analytics directly from your AI-first code editor.
- [Code assistant integrations](https://docs.orq.ai/docs/integrations/code-assistants/intro.md): Connect Claude Code, Cursor, Codex, Warp, and Claude Desktop to your Orq.ai workspace using the Model Context Protocol and Orq Skills.
- [Orq MCP Server tools and quickstart](https://docs.orq.ai/docs/integrations/code-assistants/mcp.md): Connect AI code assistants to your Orq.ai workspace via the Model Context Protocol. Reference for all 30 available tools with usage examples.
- [Orq Skills for code assistants](https://docs.orq.ai/docs/integrations/code-assistants/skills.md): Extend Claude Code, Cursor, Codex, and other AI assistants with reusable Skills and Commands built for the full Build, Evaluate, Optimize lifecycle on Orq.ai.
- [Warp MCP Integration](https://docs.orq.ai/docs/integrations/code-assistants/warp.md): Integrate Orq.ai with Warp terminal using the Model Context Protocol. Manage experiments, datasets, and analytics directly from your AI-powered terminal.
- [Integrations overview](https://docs.orq.ai/docs/integrations/overview.md): Connect LLM providers, AI frameworks, and code assistants to Orq.ai. Route API calls, send observability traces, and build agents in AI Studio.
- [Alibaba](https://docs.orq.ai/docs/integrations/providers/alibaba.md): Route Alibaba Qwen model requests through Orq.ai's AI Router. Set up your API key and access Qwen chat and embedding models with built-in observability.
- [Anthropic Claude integration](https://docs.orq.ai/docs/integrations/providers/anthropic.md): Access Claude models through Orq.ai. Use Claude 4.6 Opus, Sonnet, and Claude 4.5 Haiku with enhanced routing, caching, and prompt management capabilities.
- [Amazon Bedrock](https://docs.orq.ai/docs/integrations/providers/aws-bedrock.md): Connect Amazon Bedrock to Orq.ai's AI Router. Configure your AWS access key, secret, and region to route Bedrock model requests with observability.
- [Azure](https://docs.orq.ai/docs/integrations/providers/azure.md): Import Azure OpenAI models into Orq.ai's AI Router. Set up your Azure AI Studio deployment endpoint and API key for managed model access.
- [ByteDance](https://docs.orq.ai/docs/integrations/providers/bytedance.md): Route ByteDance model requests through Orq.ai's AI Router. Set up your API key and access ByteDance LLMs with built-in caching and observability.
- [Cerebras](https://docs.orq.ai/docs/integrations/providers/cerebras.md): Route Cerebras inference requests through Orq.ai's AI Router. Set up your API key and access Cerebras fast-inference models with observability.
- [Cohere](https://docs.orq.ai/docs/integrations/providers/cohere.md): Route Cohere model requests through Orq.ai's AI Router. Set up your API key and access Command, Embed, and Rerank models with observability.
- [Contextual AI](https://docs.orq.ai/docs/integrations/providers/contextual-ai.md): Route Contextual AI model requests through Orq.ai's AI Router. Set up your API key and access RAG-optimized models with built-in observability.
- [DeepSeek](https://docs.orq.ai/docs/integrations/providers/deepseek.md): Route DeepSeek model requests through Orq.ai's AI Router. Set up your API key and access DeepSeek reasoning and chat models with observability.
- [ElevenLabs](https://docs.orq.ai/docs/integrations/providers/elevenlabs.md): Route ElevenLabs requests through Orq.ai's AI Router. Set up your API key and access text-to-speech and voice synthesis models with observability.
- [Fal](https://docs.orq.ai/docs/integrations/providers/fal.md): Route Fal model requests through Orq.ai's AI Router. Set up your API key and access Fal image generation and media models with observability.
- [Google AI](https://docs.orq.ai/docs/integrations/providers/google-ai.md): Route Google Gemini model requests through Orq.ai's AI Router. Set up your Google AI Studio API key for chat, embedding, and multimodal models.
- [Groq](https://docs.orq.ai/docs/integrations/providers/groq.md): Route Groq model requests through Orq.ai's AI Router. Set up your API key and access Groq's ultra-fast LPU inference with built-in observability.
- [H Company](https://docs.orq.ai/docs/integrations/providers/hcompany.md): Route H Company Holo3 model requests through Orq.ai's AI Router. Set up your API key and access Holo3 models with caching and observability.
- [Inceptron](https://docs.orq.ai/docs/integrations/providers/inceptron.md): Route Inceptron inference requests through Orq.ai's AI Router. Set up your API key and access leading open-source models through Inceptron's high-performance, OpenAI-compatible API.
- [Jina](https://docs.orq.ai/docs/integrations/providers/jina.md): Route Jina model requests through Orq.ai's AI Router. Set up your API key and access Jina embedding and reranking models with observability.
- [Leonardo AI](https://docs.orq.ai/docs/integrations/providers/leonardo-ai.md): Route Leonardo AI requests through Orq.ai's AI Router. Set up your API key and access Leonardo's image generation models with observability.
- [LiteLLM custom model provider](https://docs.orq.ai/docs/integrations/providers/litellm.md): Import custom models using LiteLLM integration. Connect self-hosted, private, or custom LLM providers to the AI Router for unified model access.
- [Mistral](https://docs.orq.ai/docs/integrations/providers/mistral.md): Route Mistral model requests through Orq.ai's AI Router. Set up your API key and access Mistral chat and embedding models with observability.
- [Moonshot AI](https://docs.orq.ai/docs/integrations/providers/moonshot-ai.md): Route Moonshot AI model requests through Orq.ai's AI Router. Set up your API key and access Moonshot's language models with observability.
- [OpenAI-compatible models](https://docs.orq.ai/docs/integrations/providers/open-ai-like.md): Connect any OpenAI-compatible endpoint to Orq.ai's AI Router. Configure custom base URLs for self-hosted or private LLM providers with observability.
- [OpenAI](https://docs.orq.ai/docs/integrations/providers/openai.md): Route OpenAI model requests through Orq.ai's AI Router. Set up your API key and access GPT, DALL-E, and Whisper models with caching and fallbacks.
- [LLM Providers](https://docs.orq.ai/docs/integrations/providers/overview.md): Connect your LLM providers to Orq.ai. Set up API keys for OpenAI, Anthropic, Google, and other supported providers to route models through AI Router.
- [Perplexity](https://docs.orq.ai/docs/integrations/providers/perplexity.md): Route Perplexity model requests through Orq.ai's AI Router. Set up your API key and access Perplexity's search-augmented models with observability.
- [Scaleway](https://docs.orq.ai/docs/integrations/providers/scaleway.md): Route Scaleway inference requests through Orq.ai's AI Router. Set up your API key and access leading open-source models through Scaleway's high-performance, OpenAI-compatible API.
- [Tensorix](https://docs.orq.ai/docs/integrations/providers/tensorix.md): Route Tensorix model requests through Orq.ai's AI Router. Access leading open-source LLMs via Tensorix's OpenAI-compatible API with observability.
- [Together AI](https://docs.orq.ai/docs/integrations/providers/together-ai.md): Route Together AI model requests through Orq.ai's AI Router. Set up your API key and access open-source LLMs hosted on Together's infrastructure.
- [Google Vertex AI](https://docs.orq.ai/docs/integrations/providers/vertex-ai.md): Connect Google Vertex AI to Orq.ai for enterprise Gemini access. Configure service account auth, project billing, and data residency controls.
- [X.AI](https://docs.orq.ai/docs/integrations/providers/xai.md): Route X.AI Grok model requests through Orq.ai's AI Router. Set up your API key and access Grok chat and reasoning models with observability.
- [Z.ai](https://docs.orq.ai/docs/integrations/providers/z-ai.md): Route Z.ai model requests through Orq.ai's AI Router. Set up your API key and access Z.ai language models with built-in caching and observability.
- [Get started with Orq.ai](https://docs.orq.ai/docs/introduction.md): Learn how Orq.ai helps teams build, deploy, and optimize LLM applications. Unified platform for prompt engineering, model routing, RAG, and AI observability.
- [Knowledge base API](https://docs.orq.ai/docs/knowledge/api.md): Create, manage, and enrich knowledge bases via API and SDKs. Programmatically add documents, configure embeddings, and build RAG applications with code.
- [Create a knowledge base](https://docs.orq.ai/docs/knowledge/creating.md): Build internal knowledge bases for RAG. Upload documents, configure embedding models, add sources, and manage chunks with the visual Knowledge Base editor.
- [Knowledge bases for RAG](https://docs.orq.ai/docs/knowledge/overview.md): Build knowledge bases for retrieval-augmented generation. Embed domain data into prompts, configure chunking, and ground AI responses.
- [Use knowledge bases in prompts](https://docs.orq.ai/docs/knowledge/using-in-prompt.md): Add knowledge bases to prompts for RAG-enhanced responses. Configure query types, set retrieval parameters, and ground LLM outputs in your domain data.
- [Memory stores for persistent context](https://docs.orq.ai/docs/memory-stores/overview.md): Add persistent memory to AI agents. Store and retrieve context across conversations, enable knowledge accumulation, and build personalized agent experiences.
- [Memory Stores via the API](https://docs.orq.ai/docs/memory-stores/using-memory-stores.md): Manage memory stores, entities, and documents via the Orq.ai API and SDKs. Create, update, and query persistent agent memory programmatically.
- [Memory Stores in the AI Studio](https://docs.orq.ai/docs/memory-stores/using-memory-stores-in-the-ai-studio.md): Create and manage memory stores visually in the AI Studio. Add entities, browse stored memories, and attach persistent context to your agents.
- [AI Router API usage](https://docs.orq.ai/docs/model-garden/api-usage.md): Manage the AI Router programmatically via API. List available models, enable providers, and configure LLM access for workspace automation workflows.
- [AI Router model catalog](https://docs.orq.ai/docs/model-garden/overview.md): Browse and enable 300+ LLM models from OpenAI, Anthropic, Google, and more. Unified API for all providers with real-time cost and performance data.
- [Browse and enable models in the UI](https://docs.orq.ai/docs/model-garden/using-ui.md): Browse and enable LLM models in your workspace. Discover 300+ models from OpenAI, Anthropic, Google, and more through the AI Router interface.
- [Command bar for quick actions](https://docs.orq.ai/docs/navigation/command-bar.md): Access quick actions, search docs, and navigate recent entities instantly with the Command Bar using ⌘+K or CTRL+K shortcuts in AI Studio.
- [Keyboard Shortcuts](https://docs.orq.ai/docs/navigation/keyboard-shortcuts.md): Complete list of keyboard shortcuts for navigating Orq.ai. Switch between projects, open the command bar, and manage entities faster.
- [Insights | Observability](https://docs.orq.ai/docs/observability/insights.md): Explore topics and patterns surfaced from your AI traces to understand usage trends, intents, and model behaviour across your workspace.
- [LLM generation logs](https://docs.orq.ai/docs/observability/logs.md): View detailed logs for every LLM generation. Debug failed requests, analyze latency and costs, and reproduce issues with full request/response context.
- [AI observability](https://docs.orq.ai/docs/observability/overview.md): Monitor AI applications with Orq.ai observability. Track logs, traces, and threads. Integrate OpenTelemetry for performance debugging.
- [Conversation threads in traces](https://docs.orq.ai/docs/observability/threads.md): Group related LLM calls into threads for observability and analysis. Threads are a labeling mechanism and do not store or inject message history.
- [Trace Automations | Automated LLM Monitoring](https://docs.orq.ai/docs/observability/trace-automation.md): Automatically act on LLM trace data with rule-based automations. Add traces to datasets, trigger reviews, and scale quality monitoring without manual work.
- [LLM traces for debugging](https://docs.orq.ai/docs/observability/traces.md): Explore step-by-step details of every LLM generation. Debug RAG pipelines, evaluators, guardrails, and caching with full workflow visibility and cost tracking.
- [The Orq Flow](https://docs.orq.ai/docs/orq-flow.md): A unified workflow for teams to manage the agent lifecycle in a central platform, from planning requirements to operating in production.
- [Create a playground session](https://docs.orq.ai/docs/playground/creating.md): Set up interactive playgrounds to test LLM prompts. Compare multiple models side-by-side, adjust parameters, and iterate on prompts in real-time.
- [Playground for prompt testing](https://docs.orq.ai/docs/playground/overview.md): Test LLM prompts in Orq.ai's interactive playground. Compare models side-by-side, adjust parameters, and iterate on prompts before deploying to production.
- [Project API Key](https://docs.orq.ai/docs/projects/api-keys.md): Access or create a scoped project-level API key from your project settings page. Manage authentication tokens for project-specific AI resources.
- [Human Review](https://docs.orq.ai/docs/projects/human-review.md): Set up project-level human review workflows to flag LLM outputs for manual inspection. Configure review criteria, thresholds, and annotation queues.
- [Projects in Orq.ai](https://docs.orq.ai/docs/projects/overview.md): Organize AI resources with projects. Group prompts, deployments, agents, and knowledge bases. Manage team access and permissions for isolated environments.
- [Create a prompt snippet](https://docs.orq.ai/docs/prompt-snippets/creating.md): Build reusable prompt snippets for LLM applications. Create modular prompt templates with variables for consistent AI behavior across multiple prompts.
- [Prompt snippets](https://docs.orq.ai/docs/prompt-snippets/overview.md): Create reusable prompt snippets for LLM applications. Save text blocks to use across multiple prompts and update them all at once with a single edit.
- [Manage prompts via the API](https://docs.orq.ai/docs/prompts/api-usage.md): Manage prompts programmatically with the Orq.ai API and SDKs. Create, update, version, and retrieve prompts for automated LLM workflows and CI/CD integration.
- [Create a prompt](https://docs.orq.ai/docs/prompts/creating.md): Build prompts for LLM applications with the visual Prompt Studio. Configure models, add variables, set parameters, and deploy to playgrounds and experiments.
- [Prompt engineering guide](https://docs.orq.ai/docs/prompts/engineering-guide.md): Master prompt engineering with best practices for LLM optimization. Learn model-specific formatting, structured prompts, and techniques for consistent outputs.
- [Prompts management and versioning](https://docs.orq.ai/docs/prompts/overview.md): Create, manage, and version prompts for LLM applications. Configure model parameters, use variables, and integrate across deployments.
- [AI prompt generator](https://docs.orq.ai/docs/prompts/prompt-generator.md): Generate optimized prompts with AI assistance. Describe your use case and let the Prompt Generator create structured, effective prompts automatically.
- [Prompt templating with Jinja and Mustache](https://docs.orq.ai/docs/prompts/templating.md): Reference for Jinja and Mustache template engines in Orq.ai deployments. Covers variables, conditionals, loops, filters, and macros.
- [App tracking for AI requests](https://docs.orq.ai/docs/proxy/app-tracking.md): Track LLM usage by application context. Segment AI analytics, costs, and performance metrics across different apps, features, or user segments for insights.
- [Audio speech and transcription](https://docs.orq.ai/docs/proxy/audio.md): Convert text to speech, transcribe audio, and translate audio to English through the AI Router with multiple TTS and STT providers.
- [LLM response caching](https://docs.orq.ai/docs/proxy/cache.md): Cache identical LLM requests to reduce latency by 95% and cut API costs. Configure TTL, exact match caching, and optimize response times for repeated queries.
- [LLM fallbacks for automatic failover](https://docs.orq.ai/docs/proxy/fallbacks.md): Configure automatic LLM fallbacks for high availability. Retry failed requests with different providers or models when rate limits or errors occur.
- [Agno framework integration](https://docs.orq.ai/docs/proxy/frameworks/agno.md): Build production AI agents with Agno and Orq.ai's AI Router. Create tool-using agents with complete observability and access to 300+ LLMs.
- [AutoGen multi-agent integration](https://docs.orq.ai/docs/proxy/frameworks/autogen.md): Connect Microsoft AutoGen to the AI Router for multi-agent workflows. Build collaborative AI systems with enhanced routing, caching, and observability.
- [AWS Strands Agents integration](https://docs.orq.ai/docs/proxy/frameworks/aws-strands.md): Connect AWS Strands Agents to Orq.ai's AI Router with OpenTelemetry observability. Access 300+ LLMs with built-in reliability and tracing.
- [Azure AI Agents integration](https://docs.orq.ai/docs/proxy/frameworks/azure-ai-agents.md): Connect Azure AI Agents to Orq.ai's AI Router for complete observability, built-in reliability, and access to 250+ LLMs across 20+ providers.
- [BeeAI framework integration](https://docs.orq.ai/docs/proxy/frameworks/beeai.md): Send BeeAI Framework traces to Orq.ai using OpenTelemetry and OpenInference. Monitor agent workflows, tool calls, and LLM interactions.
- [Claude SDK | AI Router & Observability](https://docs.orq.ai/docs/proxy/frameworks/claude-agent-sdk.md): Route Anthropic SDK calls through the orq.ai AI Router. Access fallbacks, caching, load balancing, and cost tracking for all Claude models.
- [CrewAI framework integration](https://docs.orq.ai/docs/proxy/frameworks/crewai.md): Connect CrewAI to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [DSPy framework integration](https://docs.orq.ai/docs/proxy/frameworks/dspy.md): Integrate DSPy with the AI Router for optimized LLM programs. Use Stanford's framework for automatic prompt optimization and reasoning-based AI systems.
- [Google ADK](https://docs.orq.ai/docs/proxy/frameworks/google-ai.md): Connect Google Agent Development Kit to Orq.ai via OpenTelemetry. Trace agent workflows, tool calls, and Gemini model interactions.
- [Haystack](https://docs.orq.ai/docs/proxy/frameworks/haystack.md): Send Deepset Haystack traces to Orq.ai via OpenTelemetry. Monitor RAG pipelines, retrieval quality, and LLM generation performance.
- [Instructor structured output integration](https://docs.orq.ai/docs/proxy/frameworks/instructor.md): Combine the AI Router with Instructor for type-safe LLM responses. Generate Pydantic models and structured JSON outputs with validation and retries.
- [LangChain framework integration](https://docs.orq.ai/docs/proxy/frameworks/langchain.md): Connect LangChain to the AI Router for enhanced LLM orchestration. Use Orq.ai as a drop-in provider for chains, agents, and RAG applications.
- [LangGraph framework integration](https://docs.orq.ai/docs/proxy/frameworks/langgraph.md): Connect LangGraph to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [LiteLLM](https://docs.orq.ai/docs/proxy/frameworks/litellm.md): Send LiteLLM traces to Orq.ai using OpenTelemetry instrumentation. Monitor LLM calls, costs, and latency across all providers LiteLLM supports.
- [LiveKit Agents integration](https://docs.orq.ai/docs/proxy/frameworks/livekit.md): Connect LiveKit Agents to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [LlamaIndex Agents integration](https://docs.orq.ai/docs/proxy/frameworks/llama-agents.md): Connect LlamaIndex Agents to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [LlamaIndex framework integration](https://docs.orq.ai/docs/proxy/frameworks/llamaindex.md): Connect LlamaIndex to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [Mastra framework integration](https://docs.orq.ai/docs/proxy/frameworks/mastra.md): Connect Mastra to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [OpenAI SDK drop-in integration](https://docs.orq.ai/docs/proxy/frameworks/openai.md): Use the AI Router with the OpenAI SDK. Replace the OpenAI base URL with Orq.ai for routing, caching, monitoring, and multi-provider fallback capabilities.
- [OpenAI Agents SDK integration](https://docs.orq.ai/docs/proxy/frameworks/openai-agents.md): Connect the OpenAI Agents SDK to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [OpenClaw](https://docs.orq.ai/docs/proxy/frameworks/openclaw.md): Send OpenClaw framework traces to Orq.ai via OpenTelemetry. Monitor agent interactions, tool calls, and LLM performance in real time.
- [Framework integrations](https://docs.orq.ai/docs/proxy/frameworks/overview.md): Integrate the AI Router with popular LLM frameworks. Connect LangChain, Vercel AI, DSPy, Instructor, and more for enhanced AI application development.
- [Pydantic AI integration](https://docs.orq.ai/docs/proxy/frameworks/pydantic-ai.md): Connect Pydantic AI to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [Microsoft Semantic Kernel integration](https://docs.orq.ai/docs/proxy/frameworks/semantic-kernel.md): Connect Microsoft Semantic Kernel to Orq.ai's AI Router for complete observability, built-in reliability, and access to 300+ LLMs across 20+ providers.
- [SmolAgents integration](https://docs.orq.ai/docs/proxy/frameworks/smolagents.md): Connect HuggingFace SmolAgents to Orq.ai's AI Router for 300+ LLMs and send traces via OpenTelemetry for full agent observability.
- [Vercel AI SDK integration](https://docs.orq.ai/docs/proxy/frameworks/vercel-ai.md): Use the AI Router with Vercel AI SDK for streaming LLM responses. Build real-time AI chat interfaces with React Server Components and edge runtime.
- [Identity tracking for user analytics](https://docs.orq.ai/docs/proxy/identity-tracking.md): Monitor LLM usage per user or identity. Track individual AI interactions, costs, and patterns for personalization, billing, and user behavior analytics.
- [Image generation via AI Router](https://docs.orq.ai/docs/proxy/image-generation.md): Generate, edit, and create image variations through the AI Router. Supports URL and base64 responses with fallbacks, caching, and load balancing.
- [Dynamic inputs for runtime configuration](https://docs.orq.ai/docs/proxy/inputs.md): Pass dynamic inputs to LLM prompts at runtime. Configure variables, context, and parameters through the AI Router for flexible prompt execution.
- [Knowledge bases via AI Router](https://docs.orq.ai/docs/proxy/knowledge-bases.md): Integrate knowledge bases through the AI Router. Enable RAG retrieval in LLM calls with automatic context injection for enhanced AI responses.
- [Load balancing across providers](https://docs.orq.ai/docs/proxy/load-balancing.md): Distribute LLM requests across multiple providers with weighted routing. Optimize costs, run A/B tests, and ensure redundancy with load balancing.
- [Multimodal requests via AI Router](https://docs.orq.ai/docs/proxy/multimodal.md): Send images, PDFs, and audio through the AI Router. Generate images and speech. One unified OpenAI-compatible API for all input and output modalities.
- [OpenAI-compatible API](https://docs.orq.ai/docs/proxy/openai-compatible-api.md): Use Orq.ai as an OpenAI-compatible API proxy. Access 300+ LLM models with your existing OpenAI SDK by changing only the base URL. Zero code changes required.
- [PDF document input for LLMs](https://docs.orq.ai/docs/proxy/pdf-input.md): Send PDF documents to LLMs through the AI Router. Extract text, analyze documents, and process multimodal inputs for AI-powered document workflows.
- [Prompt caching for reduced token costs](https://docs.orq.ai/docs/proxy/prompt-caching.md): Cache repeated prompt prefixes at the provider level to reduce input token costs and latency. Supported on Anthropic, OpenAI, and Google models.
- [Reasoning models](https://docs.orq.ai/docs/proxy/reasoning.md): Use reasoning and thinking-capable models like o1, o3, and Claude through the AI Router. Configure reasoning effort and token budgets per request.
- [Retries and error handling](https://docs.orq.ai/docs/proxy/retries.md): Automatically retry failed LLM requests with exponential backoff. Handle rate limits, server errors, and network issues for resilient AI applications.
- [LLM response streaming](https://docs.orq.ai/docs/proxy/streaming.md): Enable real-time streaming for LLM responses. Deliver incremental content for better UX with Server-Sent Events, React hooks, and error handling patterns.
- [Structured outputs with JSON schema](https://docs.orq.ai/docs/proxy/structured-outputs.md): Generate type-safe JSON responses with guaranteed schema compliance. Use Zod or Pydantic for validated LLM outputs with full TypeScript/Python support.
- [Supported models in AI Router](https://docs.orq.ai/docs/proxy/supported-models.md): Browse LLM models available through the AI Router. Access GPT, Claude, Gemini, and 300+ models from top providers with unified API integration.
- [Thread management for grouped requests](https://docs.orq.ai/docs/proxy/thread-management.md): Group related AI Router requests into conversation threads for observability. Threads label related calls together without storing message history.
- [Request timeouts](https://docs.orq.ai/docs/proxy/timeouts.md): Set maximum LLM request duration to prevent hanging calls. Configure timeouts per request for chat, batch processing, and streaming with automatic fallback.
- [Tool calling and function execution](https://docs.orq.ai/docs/proxy/tool-calling.md): Enable LLMs to call external functions with structured parameters. Build AI agents that interact with APIs, databases, and external services.
- [Use managed prompts via AI Router](https://docs.orq.ai/docs/proxy/using-prompts.md): Execute versioned prompts through the AI Router. Call managed prompts by key or ID for consistent LLM behavior without hardcoded strings.
- [Vision and image analysis](https://docs.orq.ai/docs/proxy/vision.md): Analyze images with LLMs using Orq.ai's vision API. Support for OCR, chart analysis, document processing, and multimodal AI interactions.
- [Web search in Responses API](https://docs.orq.ai/docs/proxy/web-search.md): Give models access to current web information via the Responses API with built-in web search across OpenAI, Anthropic, and Google.
- [Quick start guide](https://docs.orq.ai/docs/quick-start.md): Get started with Orq.ai in 5 minutes. Create your first deployment, make LLM API calls, and start building AI applications with step-by-step instructions.
- [Activity | AI Router Activity Overview](https://docs.orq.ai/docs/router/activity.md): View your AI Router dashboard with real-time activity and metrics. Monitor API usage, request statistics, and model performance at a glance.
- [AI Router API keys](https://docs.orq.ai/docs/router/api-keys.md): Create and manage AI Router API keys with optional cost, token, and rate limits for production, staging, and development environments.
- [Auto Router | Intelligent routing in AI Router](https://docs.orq.ai/docs/router/auto-router.md): Automatically route each request to the optimal model based on your optimization strategy. Reduce costs without sacrificing the quality that matters.
- [AI Router credits and spending](https://docs.orq.ai/docs/router/credits.md): Credits are the AI Router's budgeting system. Top up your balance, configure auto top-up, set per-key spending limits, and view transaction history.
- [AI Router dashboard](https://docs.orq.ai/docs/router/dashboard.md): View the AI Router dashboard for a quick overview of your setup status, recent activity, connected providers, and getting started steps.
- [Getting started with the AI Router](https://docs.orq.ai/docs/router/getting-started.md): Route requests across OpenAI, Anthropic, Google, and AWS with a single API. Built-in retries, fallbacks, caching, and knowledge base integration.
- [Guardrail Rules | AI Router](https://docs.orq.ai/docs/router/guardrail-rules.md): Configure guardrail rules in the AI Router to validate and control LLM requests and responses with evaluators triggered by CEL conditions.
- [AI Router](https://docs.orq.ai/docs/router/overview.md): A unified API gateway to route requests across OpenAI, Anthropic, Google, AWS, and other LLM providers with built-in reliability, caching, and observability.
- [Policies | AI Router](https://docs.orq.ai/docs/router/policies.md): Bundle model routing, evaluators, and budget limits into named AI Router policies that you invoke directly from your API calls.
- [Providers | AI Router](https://docs.orq.ai/docs/router/providers-overview.md): Connect LLM providers to the AI Router by adding your API keys and endpoints. Supports OpenAI, Anthropic, Google, AWS, and 20+ other providers.
- [Routing Rules | AI Router](https://docs.orq.ai/docs/router/routing-rules.md): Use CEL-based routing rules to redirect AI Router requests to different models based on request attributes, evaluated in priority order.
- [Settings | AI Router](https://docs.orq.ai/docs/router/settings.md): Configure default models per use case — chat, tool calling, embeddings, and insight — across your organization in the Orq.ai AI Router.
- [AI Router traces](https://docs.orq.ai/docs/router/traces.md): Inspect every AI Router request as a detailed trace. View latency, token usage, cache hits, and provider responses for each routed call.
- [Models | AI Router](https://docs.orq.ai/docs/router/using-the-router.md): Browse available LLM models and enable them in your AI Router workspace. Filter by provider, capability, and pricing to find the right model.
- [Create Tools](https://docs.orq.ai/docs/tools/overview.md): Add function calling to LLM applications with tools. Create HTTP, Python, MCP, or JSON Schema tools to integrate AI models with external APIs and services.
- [Build a multi-agent HR system](https://docs.orq.ai/docs/tutorials/agents-API.md): Build a multi-agent HR system with Python. Create specialized agents for benefits, PTO, and policy questions using memory and knowledge.
- [Automate Evals & Observability with Claude Code + orq.ai](https://docs.orq.ai/docs/tutorials/automate-evals-and-observability-with-claude-code.md): Build, run, and analyze evaluations with Claude Code and orq.ai MCP. Query observability data and automate eval workflows from your terminal.
- [Build a customer support chatbot](https://docs.orq.ai/docs/tutorials/buildingcustomersupportchatwithaigateway.md): Build a production-ready customer support chatbot with streaming, fallbacks, caching, and RAG. Complete Node.js tutorial with knowledge base integration.
- [Capture user feedback on LLM responses](https://docs.orq.ai/docs/tutorials/capturing-feedback-with-orq.md): Implement structured user feedback to improve your LLM chatbot. Capture ratings, log defects, and create a continuous learning loop for better AI responses.
- [Chain deployments for multi-step workflows](https://docs.orq.ai/docs/tutorials/chaining-deployments.md): Chain multiple LLM deployments for complex workflows. Extract data from receipts, validate with evaluators, and summarize results with step-by-step guidance.
- [Cookbooks and tutorials](https://docs.orq.ai/docs/tutorials/cookbooks.md): Step-by-step tutorials for building AI applications with Orq.ai. Covers RAG chatbots, text-to-SQL, PDF extraction, and multi-agent systems.
- [Running evaluations in parallel with Evaluatorq](https://docs.orq.ai/docs/tutorials/evaluator-q.md): Run AI experiments from code using Evaluatorq. Compare deployments and agents side-by-side with custom evaluators across any framework.
- [Build and compare insurance claims agents with MCP](https://docs.orq.ai/docs/tutorials/insurance-claims-mcp-cookbook.md): Build single-agent and multi-agent insurance claims systems via orq.ai MCP, then compare them with evaluators and a 15-case dataset.
- [Build an intent classification chatbot](https://docs.orq.ai/docs/tutorials/intent-classification.md): Build and evaluate an intent classification system with Orq.ai. Categorize user queries for chatbots, customer support, and task automation with Python.
- [LLM and AI terminology glossary](https://docs.orq.ai/docs/tutorials/llm-glossary.md): Complete glossary of LLM, LLMOps, and prompt engineering terms. Learn definitions for transformers, fine-tuning, RAG, tokens, embeddings, and 200+ AI concepts.
- [Build AI chatbots with Lovable and Orq.ai](https://docs.orq.ai/docs/tutorials/lovable-integration.md): Build AI chatbots with Lovable and Orq.ai. Create RAG-powered FAQ bots using prompt-based development without backend engineering.
- [Maintain chat history with a model](https://docs.orq.ai/docs/tutorials/maintaining-history-with-a-model.md): Maintain conversation history with Orq.ai deployments. Build stateful chatbots that remember context across messages with Python and TypeScript examples.
- [Build a multilingual FAQ bot with RAG](https://docs.orq.ai/docs/tutorials/multilingual-faq-bot.md): Build a multilingual FAQ chatbot with RAG. Use Orq.ai's Routing Engine to serve multiple languages dynamically without hardcoded logic.
- [Extract data from PDFs with LLMs](https://docs.orq.ai/docs/tutorials/pdf-extraction.md): Extract structured data from PDF invoices with AI. Transform unstructured documents into actionable JSON using Orq.ai's vision models and deployment features.
- [Prompt management tutorial](https://docs.orq.ai/docs/tutorials/prompt-manager.md): Use Orq.ai as a prompt manager for your LLM calls. Fetch deployment configurations at runtime while keeping control over your infrastructure.
- [Extract data from receipts with OCR](https://docs.orq.ai/docs/tutorials/receipt-extraction.md): Extract data from receipt images with AI. Process JPG and PNG files to structured JSON with vendor names, amounts, and dates using Orq.ai vision models.
- [Red Teaming LLMs with evaluatorq](https://docs.orq.ai/docs/tutorials/red-teaming.md): Automatically probe your LLM deployments and agents for security vulnerabilities using evaluatorq, the orq.ai red teaming CLI and Python SDK.
- [Convert natural language to SQL queries](https://docs.orq.ai/docs/tutorials/text-to-sql.md): Transform natural language into SQL queries with AI. Build a text-to-SQL application that lets non-technical users query databases using plain English.
- [Understanding AI operations in Control Tower](https://docs.orq.ai/docs/tutorials/understanding-controltower.md): Non-technical guide to monitoring AI agents, tracking costs, and maintaining control over your AI operations with the Agent Control Tower.
- [Use Pinecone and custom vector databases](https://docs.orq.ai/docs/tutorials/using-thirdparty-vectordbs-with-orq.md): Connect Pinecone or other vector databases to Orq.ai for custom RAG. Use external embeddings and retrieval while leveraging Orq.ai's deployment features.
- [Webhooks for real-time notifications](https://docs.orq.ai/docs/webhooks/overview.md): Subscribe to Orq.ai events and receive real-time HTTP POST notifications when agents, deployments, prompts, and other resources change.
- [orq MCP Server](https://docs.orq.ai/docs/workspace-mcp.md)
- [Create schedule](https://docs.orq.ai/reference/agent-schedules/create-schedule.md): Creates a schedule that runs the agent on a recurring or one-off cadence. The minimum firing interval is 1 hour for `cron` and `interval`; `once` schedules are exempt.
- [Delete schedule](https://docs.orq.ai/reference/agent-schedules/delete-schedule.md): Permanently removes a schedule from NATS, Mongo, and the Redis cache.
- [List schedules](https://docs.orq.ai/reference/agent-schedules/list-schedules.md): Lists all schedules attached to the specified agent, most recent first.
- [Retrieve schedule](https://docs.orq.ai/reference/agent-schedules/retrieve-schedule.md): Retrieves a single schedule by ID.
- [Trigger schedule execution](https://docs.orq.ai/reference/agent-schedules/trigger-schedule-execution.md): Runs the schedule's payload immediately (≈10 seconds after the request, to stay above the NATS scheduler's minimum deliver-at margin). The schedule's regular cadence is unaffected. Inactive schedules return 400.
- [Update schedule](https://docs.orq.ai/reference/agent-schedules/update-schedule.md): Partially updates a schedule. Any omitted field is left unchanged. Changing `expression` or `type` (or reactivating from inactive) re-publishes the NATS schedule and bumps `generation`; payload-only and `agent_tag`-only changes leave the firing cadence in place.
- [Create agent](https://docs.orq.ai/reference/agents/create-agent.md): Creates a new agent with the specified configuration, including model selection, instructions, tools, and knowledge bases. Agents are intelligent assistants that can execute tasks, interact with tools, and maintain context through memory stores. The agent can be configured with a primary model and o…
- [Create response](https://docs.orq.ai/reference/agents/create-response.md): Initiates an agent conversation and returns a complete response. This endpoint manages the full lifecycle of an agent interaction, from receiving the initial message through all processing steps until completion. Supports synchronous execution (waits for completion) and asynchronous execution (retur…
- [Delete agent](https://docs.orq.ai/reference/agents/delete-agent.md): Permanently removes an agent from the workspace. This operation is irreversible and will delete all associated configuration including model assignments, tools, knowledge bases, memory stores, and cached data. Active agent sessions will be terminated, and the agent key will become available for reus…
- [Execute an agent task](https://docs.orq.ai/reference/agents/execute-an-agent-task.md): Invokes an agent to perform a task with the provided input message. The agent will process the request using its configured model and tools, maintaining context through memory stores if configured. Supports automatic model fallback on primary model failure, tool execution, knowledge base retrieval,…
- [Get response](https://docs.orq.ai/reference/agents/get-response.md): Retrieves the current state of an agent response by task ID. Returns the response output, model information, token usage, and execution status. When the agent is still processing, the output array will be empty and status will be `in_progress`. Once completed, the response includes the full output,…
- [List agents](https://docs.orq.ai/reference/agents/list-agents.md): Retrieves a comprehensive list of agents configured in your workspace. Supports pagination for large datasets and returns agents sorted by creation date (newest first). Each agent in the response includes its complete configuration: model settings with fallback options, instructions, tools, knowledg…
- [Refresh A2A agent card](https://docs.orq.ai/reference/agents/refresh-a2a-agent-card.md): Fetches the latest agent card from the external A2A agent and updates the cached card in the database. Similar to MCP server refresh functionality.
- [Register external A2A agent](https://docs.orq.ai/reference/agents/register-external-a2a-agent.md): Register an external A2A-compliant agent into Orquesta. The agent card will be fetched during registration to validate the agent and cache its capabilities.
- [Retrieve agent](https://docs.orq.ai/reference/agents/retrieve-agent.md): Retrieves detailed information about a specific agent identified by its unique key or identifier. Returns the complete agent manifest including configuration settings, model assignments (primary and fallback), tools, knowledge bases, memory stores, instructions, and execution parameters. Use this en…
- [Run agent with streaming response](https://docs.orq.ai/reference/agents/run-agent-with-streaming-response.md): Dynamically configures and executes an agent while streaming the interaction in real-time via Server-Sent Events (SSE). Intelligently manages agent versioning by reusing existing agents with matching configurations or creating new versions when configurations differ. Combines the flexibility of inli…
- [Run an agent with configuration](https://docs.orq.ai/reference/agents/run-an-agent-with-configuration.md): Executes an agent using inline configuration or references an existing agent. Supports dynamic agent creation where the system automatically manages agent versioning - reusing existing agents with matching configurations or creating new versions when configurations differ. Ideal for programmatic age…
- [Stream agent execution in real-time](https://docs.orq.ai/reference/agents/stream-agent-execution-in-real-time.md): Executes an agent and streams the interaction in real-time using Server-Sent Events (SSE). Provides live updates as the agent processes the request, including message chunks, tool calls, and execution status. Perfect for building responsive chat interfaces and monitoring agent progress. The stream c…
- [Update agent](https://docs.orq.ai/reference/agents/update-agent.md): Modifies an existing agent's configuration with partial updates. Supports updating any aspect of the agent including model assignments (primary and fallback), instructions, tools, knowledge bases, memory stores, and execution parameters. Only the fields provided in the request body will be updated;…
- [Add items to an annotation queue](https://docs.orq.ai/reference/annotation-queue/add-items-to-an-annotation-queue.md)
- [Create an annotation queue](https://docs.orq.ai/reference/annotation-queue/create-an-annotation-queue.md)
- [Delete an annotation queue](https://docs.orq.ai/reference/annotation-queue/delete-an-annotation-queue.md)
- [Edit an annotation queue](https://docs.orq.ai/reference/annotation-queue/edit-an-annotation-queue.md)
- [Query items from an annotation queue](https://docs.orq.ai/reference/annotation-queue/query-items-from-an-annotation-queue.md)
- [Remove annotation queue items](https://docs.orq.ai/reference/annotation-queue/remove-annotation-queue-items.md)
- [Retrieve an annotation queue item](https://docs.orq.ai/reference/annotation-queue/retrieve-an-annotation-queue-item.md)
- [Delete all items](https://docs.orq.ai/reference/annotation-queues/delete-all-items.md)
- [List annotation queues](https://docs.orq.ai/reference/annotation-queues/list-annotation-queues.md)
- [Retrieve an annotation queue](https://docs.orq.ai/reference/annotation-queues/retrieve-an-annotation-queue.md)
- [Delete annotation from a span](https://docs.orq.ai/reference/annotations/delete-v2traces-spans-annotation.md): Remove an annotation from a span
- [Add annotation to a span](https://docs.orq.ai/reference/annotations/post-v2traces-spans-annotation.md): Annotate a span
- [Create speech](https://docs.orq.ai/reference/audio/create-speech.md): Generates audio from the input text.
- [Create transcription](https://docs.orq.ai/reference/audio/create-transcription.md): Transcribe audio files to text using the AI Router. Supports multiple audio formats and languages with automatic speech recognition models.
- [Create translation](https://docs.orq.ai/reference/audio/create-translation.md): Translate audio files to English text using the AI Router. Converts speech in any supported language to an English text transcription.
- [Create chat completion](https://docs.orq.ai/reference/chat/create-chat-completion.md): Creates a model response for the given chat conversation with support for retries, fallbacks, prompts, and variables.
- [Parse text](https://docs.orq.ai/reference/chunking/parse-text.md): Split large text documents into smaller, manageable chunks using different chunking strategies optimized for RAG (Retrieval-Augmented Generation) workflows. This endpoint supports multiple chunking algorithms including token-based, sentence-based, recursive, semantic, and specialized strategies.
- [Orq SDKs](https://docs.orq.ai/reference/client-libraries.md): Install and configure the official Orq.ai SDKs for Python and Node.js. Authenticate with your API key and start making API calls in minutes.
- [Create completion](https://docs.orq.ai/reference/completions/create-completion.md): For sending requests to legacy completion models
- [Update user information](https://docs.orq.ai/reference/contacts/create-a-contact.md): Update or add user information to workspace
- [Delete a contact](https://docs.orq.ai/reference/contacts/delete-a-contact.md)
- [List contacts](https://docs.orq.ai/reference/contacts/list-contacts.md)
- [Retrieve a contact](https://docs.orq.ai/reference/contacts/retrieve-a-contact.md)
- [Update a contact](https://docs.orq.ai/reference/contacts/update-a-contact.md)
- [Update user information](https://docs.orq.ai/reference/contacts/update-user-information.md): Update or add user information to workspace
- [Create conversation](https://docs.orq.ai/reference/conversations/create-conversation.md)
- [Delete conversation](https://docs.orq.ai/reference/conversations/delete-conversation.md)
- [List conversations](https://docs.orq.ai/reference/conversations/list-conversations.md)
- [Retrieve conversation](https://docs.orq.ai/reference/conversations/retrieve-conversation.md)
- [Update conversation](https://docs.orq.ai/reference/conversations/update-conversation.md)
- [Create a datapoint](https://docs.orq.ai/reference/datasets/create-a-datapoint.md): Creates a new datapoint in the specified dataset.
- [Create a dataset](https://docs.orq.ai/reference/datasets/create-a-dataset.md): Creates a new dataset in the specified project.
- [Delete a datapoint](https://docs.orq.ai/reference/datasets/delete-a-datapoint.md): Permanently deletes a specific datapoint from a dataset.
- [Delete a dataset](https://docs.orq.ai/reference/datasets/delete-a-dataset.md): Permanently deletes a dataset and all its datapoints. This action is irreversible.
- [Delete all datapoints](https://docs.orq.ai/reference/datasets/delete-all-datapoints.md): Delete all datapoints from a dataset. This action is irreversible.
- [List datapoints](https://docs.orq.ai/reference/datasets/list-datapoints.md): Retrieves a paginated list of datapoints from a specific dataset.
- [List datasets](https://docs.orq.ai/reference/datasets/list-datasets.md): Retrieves a paginated list of datasets for the current workspace. Results can be paginated using cursor-based pagination.
- [Retrieve a datapoint](https://docs.orq.ai/reference/datasets/retrieve-a-datapoint.md): Retrieves a datapoint object
- [Retrieve a dataset](https://docs.orq.ai/reference/datasets/retrieve-a-dataset.md): Retrieves a specific dataset by its unique identifier
- [Update a datapoint](https://docs.orq.ai/reference/datasets/update-a-datapoint.md): Update a datapoint's input, expected output, or metadata within a dataset. Modify test cases used for evaluation and experiment workflows.
- [Update a dataset](https://docs.orq.ai/reference/datasets/update-a-dataset.md): Update a dataset
- [Delete v2human evals](https://docs.orq.ai/reference/delete-v2human-evals.md)
- [Add metrics](https://docs.orq.ai/reference/deployments/add-metrics.md): Add metrics to a deployment
- [Get config](https://docs.orq.ai/reference/deployments/get-config.md): Retrieve the deployment configuration
- [Invoke](https://docs.orq.ai/reference/deployments/invoke.md): Invoke a deployment with a given payload
- [List all deployments](https://docs.orq.ai/reference/deployments/list-all-deployments.md): Returns a list of your deployments. The deployments are returned sorted by creation date, with the most recent deployments appearing first.
- [Stream](https://docs.orq.ai/reference/deployments/stream.md): Stream deployment generation. Only supported for completions and chat completions.
- [Create embeddings](https://docs.orq.ai/reference/embeddings/create-embeddings.md): Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
- [Create an Evaluator](https://docs.orq.ai/reference/evaluators/create-an-evaluator.md): Create a new evaluator to assess LLM outputs. Configure scoring criteria, models, and thresholds for automated quality assessment of AI responses.
- [Delete an Evaluator](https://docs.orq.ai/reference/evaluators/delete-an-evaluator.md): Permanently delete an evaluator and all its versions from your workspace. This action removes the evaluator and cannot be undone.
- [Get all Evaluators](https://docs.orq.ai/reference/evaluators/get-all-evaluators.md): Retrieve a paginated list of all evaluators in your workspace. Filter by type, status, or project to find specific evaluation configurations.
- [Invoke a Custom Evaluator](https://docs.orq.ai/reference/evaluators/invoke-a-custom-evaluator.md): Run an evaluator against a specific input and output pair. Returns a score and optional reasoning based on the evaluator's configured criteria.
- [List evaluator versions](https://docs.orq.ai/reference/evaluators/list-evaluator-versions.md): Returns version history for a specific evaluator
- [Update an Evaluator](https://docs.orq.ai/reference/evaluators/update-an-evaluator.md): Update an existing evaluator's configuration, scoring criteria, or model settings. Modify evaluation parameters without creating a new version.
- [Submit feedback](https://docs.orq.ai/reference/feedback/submit-feedback.md): Submit user feedback on AI-generated responses. Use the feedback endpoint to track quality ratings, corrections, and custom scores programmatically.
- [Create file](https://docs.orq.ai/reference/files/create-file.md): Files are used to upload documents that can be used with features like Deployments.
- [Delete file](https://docs.orq.ai/reference/files/delete-file.md): Permanently delete a file from your workspace storage. Removes the file content and metadata. This action cannot be undone once completed.
- [Download file content](https://docs.orq.ai/reference/files/download-file-content.md): Redirects to a presigned URL for downloading the file content by file ID.
- [List all files](https://docs.orq.ai/reference/files/list-all-files.md): Returns a list of the files that your account has access to. orq.ai sorts and returns the files by their creation dates, placing the most recently created files at the top.
- [Retrieve a file](https://docs.orq.ai/reference/files/retrieve-a-file.md): Retrieves the details of an existing file object. After you supply a unique file ID, orq.ai returns the corresponding file object.
- [Update file](https://docs.orq.ai/reference/files/update-file.md): Updates the metadata of an existing file object.
- [Get v2human evals](https://docs.orq.ai/reference/get-v2human-evals.md)
- [Get v2human evals 1](https://docs.orq.ai/reference/get-v2human-evals-1.md)
- [Create guardrail rule](https://docs.orq.ai/reference/guardrail-rules/create-guardrail-rule.md): Creates a new guardrail rule with expression, guardrails configuration, and timeout settings.
- [Delete guardrail rule](https://docs.orq.ai/reference/guardrail-rules/delete-guardrail-rule.md): Deletes an existing guardrail rule by ID.
- [Get guardrail rule](https://docs.orq.ai/reference/guardrail-rules/get-guardrail-rule.md): Retrieves the details of an existing guardrail rule by ID.
- [List guardrail rules](https://docs.orq.ai/reference/guardrail-rules/list-guardrail-rules.md): Returns a paginated list of guardrail rules for the current project.
- [Update guardrail rule](https://docs.orq.ai/reference/guardrail-rules/update-guardrail-rule.md): Partially updates an existing guardrail rule. Only provided fields are updated.
- [Create a human review set](https://docs.orq.ai/reference/human-review-sets/create-a-human-review-set.md): Create a new human review set for manual quality evaluation. Configure review criteria and assign evaluators to assess AI-generated responses.
- [Delete a human review set](https://docs.orq.ai/reference/human-review-sets/delete-a-human-review-set.md): Permanently delete a human review set and its associated reviews from your workspace. This action removes all review data and cannot be undone.
- [Get a human review set by ID](https://docs.orq.ai/reference/human-review-sets/get-a-human-review-set-by-id.md): Retrieve a specific human review set by its ID. Returns the set's configuration, review criteria, assigned evaluators, and completion status.
- [Get all human review sets](https://docs.orq.ai/reference/human-review-sets/get-all-human-review-sets.md): List all human review sets in your workspace. Returns metadata and configuration for each human evaluation set available for manual quality review.
- [Update a human review set](https://docs.orq.ai/reference/human-review-sets/update-a-human-review-set.md): Update an existing human review set's name, description, or review configuration. Modify criteria and evaluator assignments for ongoing reviews.
- [Create an identity](https://docs.orq.ai/reference/identities/create-an-identity.md): Creates a new identity with a unique external_id. If an identity with the same external_id already exists, the operation will fail. Use this endpoint to add users from your system to orq.ai for tracking their usage and engagement.
- [Delete an identity](https://docs.orq.ai/reference/identities/delete-an-identity.md): Permanently deletes an identity from your workspace and cleans up associated budget configurations. This action cannot be undone.
- [List identities](https://docs.orq.ai/reference/identities/list-identities.md): Retrieves a paginated list of identities in your workspace. Use pagination parameters to navigate through large identity lists efficiently.
- [Retrieve an identity](https://docs.orq.ai/reference/identities/retrieve-an-identity.md): Retrieves detailed information about a specific identity using their identity ID or external ID from your system.
- [Update an identity](https://docs.orq.ai/reference/identities/update-an-identity.md): Updates specific fields of an existing identity. Only the fields provided in the request body will be updated.
- [Create image](https://docs.orq.ai/reference/images/create-image.md): Create an Image
- [Create image edit](https://docs.orq.ai/reference/images/create-image-edit.md): Edit an Image
- [Create image variation](https://docs.orq.ai/reference/images/create-image-variation.md): Create an Image Variation
- [Create a knowledge](https://docs.orq.ai/reference/knowledge-bases/create-a-knowledge.md): Create a new knowledge base for retrieval-augmented generation. Configure chunking, embedding, and search settings for your RAG application.
- [Create a new datasource](https://docs.orq.ai/reference/knowledge-bases/create-a-new-datasource.md): Create a new datasource within a knowledge base. Upload files or connect external sources to populate your RAG knowledge base with content.
- [Create chunks for a datasource](https://docs.orq.ai/reference/knowledge-bases/create-chunks-for-a-datasource.md): Add new chunks to a datasource in your knowledge base. Upload pre-processed text segments with optional metadata for retrieval-augmented generation.
- [Delete a chunk](https://docs.orq.ai/reference/knowledge-bases/delete-a-chunk.md): Delete a specific chunk from a datasource in your knowledge base. Permanently removes the text segment and its embedding from the index.
- [Delete multiple chunks](https://docs.orq.ai/reference/knowledge-bases/delete-multiple-chunks.md): Delete multiple chunks from a datasource in bulk. Remove selected text segments from your knowledge base by providing a list of chunk IDs.
- [Deletes a datasource](https://docs.orq.ai/reference/knowledge-bases/deletes-a-datasource.md): Deletes a datasource from a knowledge base. Deleting a datasource will remove it from the knowledge base and all associated chunks. This action is irreversible and cannot be undone.
- [Deletes a knowledge](https://docs.orq.ai/reference/knowledge-bases/deletes-a-knowledge.md): Deletes a knowledge base. Deleting a knowledge base will delete all the datasources and chunks associated with it.
- [Get chunks total count](https://docs.orq.ai/reference/knowledge-bases/get-chunks-total-count.md): Get the total number of chunks in a datasource. Returns the count of indexed text segments available for retrieval in your knowledge base.
- [List all chunks for a datasource](https://docs.orq.ai/reference/knowledge-bases/list-all-chunks-for-a-datasource.md): List all chunks within a datasource using cursor-based pagination. Returns chunk content, metadata, and embedding status for each segment.
- [List all datasources](https://docs.orq.ai/reference/knowledge-bases/list-all-datasources.md): List all datasources within a knowledge base. Returns metadata, status, and configuration for each datasource in the specified knowledge base.
- [List all knowledge bases](https://docs.orq.ai/reference/knowledge-bases/list-all-knowledge-bases.md): Returns a list of your knowledge bases. The knowledge bases are returned sorted by creation date, with the most recent knowledge bases appearing first
- [List chunks with offset-based pagination](https://docs.orq.ai/reference/knowledge-bases/list-chunks-with-offset-based-pagination.md): List chunks within a datasource using offset-based pagination. Supports filtering and sorting to locate specific segments in your knowledge base.
- [Retrieve a chunk](https://docs.orq.ai/reference/knowledge-bases/retrieve-a-chunk.md): Retrieve a specific chunk from a datasource by its ID. Returns the chunk content, metadata, and embedding status within your knowledge base.
- [Retrieve a datasource](https://docs.orq.ai/reference/knowledge-bases/retrieve-a-datasource.md): Retrieve details of a specific datasource including its status, configuration, and chunk count within the specified knowledge base.
- [Retrieves a knowledge base](https://docs.orq.ai/reference/knowledge-bases/retrieves-a-knowledge-base.md): Retrieve a knowledge base along with its settings.
- [Search knowledge base](https://docs.orq.ai/reference/knowledge-bases/search-knowledge-base.md): Search a Knowledge Base and return the most similar chunks, along with their search and rerank scores. Note that all configuration changes made in the API will override the settings in the UI.
- [Update a chunk](https://docs.orq.ai/reference/knowledge-bases/update-a-chunk.md): Update an existing chunk's content or metadata within a datasource. Modify the text, keywords, or custom attributes of a specific segment.
- [Update a datasource](https://docs.orq.ai/reference/knowledge-bases/update-a-datasource.md): Update a datasource's name, description, or processing configuration. Modify how content is chunked and embedded within your knowledge base.
- [Updates a knowledge](https://docs.orq.ai/reference/knowledge-bases/updates-a-knowledge.md): Update an existing knowledge base's name, description, or configuration settings. Modify chunking and search parameters for your RAG pipeline.
- [Create a new memory](https://docs.orq.ai/reference/memory-stores/create-a-new-memory.md): Creates a new memory in the specified memory store.
- [Create a new memory document](https://docs.orq.ai/reference/memory-stores/create-a-new-memory-document.md): Creates a new document in the specified memory.
- [Create memory store](https://docs.orq.ai/reference/memory-stores/create-memory-store.md): Create a new memory store to persist context across AI agent sessions. Configure storage settings for long-term memory and conversation recall.
- [Delete a specific memory](https://docs.orq.ai/reference/memory-stores/delete-a-specific-memory.md): Permanently deletes a specific memory.
- [Delete a specific memory document](https://docs.orq.ai/reference/memory-stores/delete-a-specific-memory-document.md): Permanently deletes a specific memory document.
- [Delete memory store](https://docs.orq.ai/reference/memory-stores/delete-memory-store.md): Permanently delete a memory store, including memories and documents.
- [List all documents for a memory](https://docs.orq.ai/reference/memory-stores/list-all-documents-for-a-memory.md): Retrieves a paginated list of documents associated with a specific memory.
- [List all memories](https://docs.orq.ai/reference/memory-stores/list-all-memories.md): Retrieves a paginated list of memories for the memory store
- [List memory stores](https://docs.orq.ai/reference/memory-stores/list-memory-stores.md): Retrieves a paginated list of memory stores in the workspace. Use cursor-based pagination parameters to navigate through the results.
- [Retrieve a specific memory](https://docs.orq.ai/reference/memory-stores/retrieve-a-specific-memory.md): Retrieves details of a specific memory by its ID
- [Retrieve a specific memory document](https://docs.orq.ai/reference/memory-stores/retrieve-a-specific-memory-document.md): Retrieves details of a specific memory document by its ID.
- [Retrieve memory store](https://docs.orq.ai/reference/memory-stores/retrieve-memory-store.md): Retrieves detailed information about a specific memory store, including its configuration and metadata.
- [Update a specific memory](https://docs.orq.ai/reference/memory-stores/update-a-specific-memory.md): Updates the details of a specific memory.
- [Update a specific memory document](https://docs.orq.ai/reference/memory-stores/update-a-specific-memory-document.md): Updates the details of a specific memory document.
- [Update memory store](https://docs.orq.ai/reference/memory-stores/update-memory-store.md): Update the memory store configuration
- [List models](https://docs.orq.ai/reference/models/list-models.md)
- [Create moderation](https://docs.orq.ai/reference/moderations/create-moderation.md): Run content moderation on text input using the AI Router. Classify content against safety categories and get scores for harmful content detection.
- [Patch v2human evals](https://docs.orq.ai/reference/patch-v2human-evals.md)
- [Create policy](https://docs.orq.ai/reference/policies/create-policy.md): Creates a new router policy with model configuration, evaluators, retry settings, and limits.
- [Delete policy](https://docs.orq.ai/reference/policies/delete-policy.md): Deletes an existing policy by ID.
- [Get policy](https://docs.orq.ai/reference/policies/get-policy.md): Retrieves the details of an existing policy by ID.
- [List policies](https://docs.orq.ai/reference/policies/list-policies.md): Returns a paginated list of policies for the current project.
- [Update policy](https://docs.orq.ai/reference/policies/update-policy.md): Partially updates an existing policy. Only provided fields are updated.
- [Submit feedback](https://docs.orq.ai/reference/post-v2feedback.md): Submit user feedback on AI generations. Record quality ratings, thumbs up/down signals, corrections, and custom scores via the feedback endpoint.
- [Submit evaluation feedback](https://docs.orq.ai/reference/post-v2feedbackevaluation.md): Submit automated evaluation feedback for a generation. Attach evaluator scores and reasoning to specific AI response traces programmatically.
- [Remove evaluation feedback](https://docs.orq.ai/reference/post-v2feedbackevaluationremove.md): Remove previously submitted evaluation feedback from a generation. Deletes the evaluator score associated with a specific AI response trace.
- [Remove feedback](https://docs.orq.ai/reference/post-v2feedbackremove.md): Remove previously submitted user feedback from a generation. Deletes the feedback record associated with a specific AI response trace.
- [Post v2human evals](https://docs.orq.ai/reference/post-v2human-evals.md)
- [Create a prompt](https://docs.orq.ai/reference/prompts/create-a-prompt.md): Create a new prompt template in your workspace. Define system and user messages, model parameters, and variables for reusable prompt management.
- [Delete a prompt](https://docs.orq.ai/reference/prompts/delete-a-prompt.md): Permanently delete a prompt and all its versions from your workspace. This removes the prompt template and cannot be undone.
- [List all prompt versions](https://docs.orq.ai/reference/prompts/list-all-prompt-versions.md): Returns a list of your prompt versions. The prompt versions are returned sorted by creation date, with the most recent prompt versions appearing first
- [List all prompts](https://docs.orq.ai/reference/prompts/list-all-prompts.md): Returns a list of your prompts. The prompts are returned sorted by creation date, with the most recent prompts appearing first
- [Retrieve a prompt](https://docs.orq.ai/reference/prompts/retrieve-a-prompt.md): Retrieves a prompt object
- [Retrieve a prompt version](https://docs.orq.ai/reference/prompts/retrieve-a-prompt-version.md): Retrieves a specific version of a prompt by its ID and version ID.
- [Update a prompt](https://docs.orq.ai/reference/prompts/update-a-prompt.md): Update an existing prompt's configuration, messages, or model settings. Changes create a new version while preserving the version history.
- [Retrieve a remote config](https://docs.orq.ai/reference/remote-configs/retrieve-a-remote-config.md): Retrieve the resolved remote configuration for a deployment. Returns model settings, prompt content, and parameters based on the active variant.
- [Create rerank](https://docs.orq.ai/reference/rerank/create-rerank.md): Rerank a list of documents based on their relevance to a query.
- [Create response](https://docs.orq.ai/reference/responses/create-response.md): Creates a model response for the given input.
- [Create response](https://docs.orq.ai/reference/responses/v3-create-response.md): Creates a model response for the given input. Returns a response object or a stream of server-sent events.
- [Retrieve response](https://docs.orq.ai/reference/responses/v3-retrieve-response.md): Retrieves a previously created response by its ID.
- [Extract text from images with OCR](https://docs.orq.ai/reference/router/post-v2routerocr.md): Extracts text content while maintaining document structure and hierarchy
- [Create routing rule](https://docs.orq.ai/reference/routing-rules/create-routing-rule.md): Creates a new routing rule with expression, models configuration, and priority settings.
- [Delete routing rule](https://docs.orq.ai/reference/routing-rules/delete-routing-rule.md): Deletes an existing routing rule by ID.
- [Get routing rule](https://docs.orq.ai/reference/routing-rules/get-routing-rule.md): Retrieves the details of an existing routing rule by ID.
- [List routing rules](https://docs.orq.ai/reference/routing-rules/list-routing-rules.md): Returns a paginated list of routing rules for the current project, ordered by priority ascending.
- [Update routing rule](https://docs.orq.ai/reference/routing-rules/update-routing-rule.md): Partially updates an existing routing rule. Only provided fields are updated.
- [Create tool](https://docs.orq.ai/reference/tools/create-tool.md): Creates a new tool in the workspace.
- [Delete tool](https://docs.orq.ai/reference/tools/delete-tool.md): Deletes a tool by key.
- [Get tool version](https://docs.orq.ai/reference/tools/get-tool-version.md): Returns a specific version of a tool
- [List tool versions](https://docs.orq.ai/reference/tools/list-tool-versions.md): Returns version history for a specific tool
- [List tools](https://docs.orq.ai/reference/tools/list-tools.md): Lists all workspace tools. By default, returns all tools in a single response. Set `limit` to enable cursor-based pagination with `starting_after` and `ending_before`.
- [Retrieve tool](https://docs.orq.ai/reference/tools/retrieve-tool.md): Retrieves a tool by id.
- [Update tool](https://docs.orq.ai/reference/tools/update-tool.md): Updates a tool in the workspace.

## OpenAPI Specs

- [openapi](https://docs.orq.ai/openapi.json)