Evaluator Library

There are many Evaluators available to include in your Projects.

Browse through all Function Evaluators, Ragas Evaluators, and LLM Evaluators in the Hub.

You can also explore the full list of available [Evaluators](doc:evaluator) and [Prompts](doc:prompt).

From the Hub, use the Add to project button to make an Evaluator available for use in an Experiment or a Deployment.
