Hub

The Hub is a library of ready-made Evaluators and Prompts that you can browse and import into your own projects.

Browse through all available [Evaluators](doc:evaluator) and [Prompts](doc:prompt).

You can add any Prompt or Evaluator from the Hub to any project using the Add to project button.

A modal opens where you choose a Project and folder to import the entity into. Once imported, the Prompt or Evaluator is available to use within Playgrounds, Experiments, and Deployments.
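Entities imported from the Hub behave like natively created ones. As an illustration, once an imported Prompt is referenced by a Deployment, that Deployment can be invoked through the orq.ai SDK. The snippet below is a minimal sketch assuming the Python SDK and a hypothetical deployment key (`hub_imported_prompt`); the exact client class and invocation options may differ by SDK version, see Integrating a Deployment for the canonical API.

```python
# Minimal sketch: calling a Deployment that references a Prompt imported
# from the Hub. Assumes the orq.ai Python SDK (pip install orq-ai-sdk);
# the deployment key and input variable below are hypothetical placeholders.
import os

from orq_ai_sdk import OrqAI

client = OrqAI(
    api_key=os.environ["ORQ_API_KEY"],  # your workspace API key
    environment="production",
)

# "hub_imported_prompt" stands in for the key of a Deployment that uses
# the Prompt you imported from the Hub; replace it with your own key.
generation = client.deployments.invoke(
    key="hub_imported_prompt",
    inputs={"question": "What does the Hub do?"},  # prompt variables, if any
)

print(generation.choices[0].message.content)
```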
