Evaluator Library

The Evaluator Library offers a wide range of Evaluators that you can include in your Projects.

Browse through all Function Evaluators, Ragas Evaluators, and LLM Evaluators in the Hub. For more detail on each resource type, see [Evaluators](doc:evaluator) and [Prompts](doc:prompt).

From the Hub, use the Add to project button to make an Evaluator available for use in an Experiment or a Deployment.
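If you prefer to script this flow rather than click through the Hub, the sketch below shows one way listing available Evaluators might look over the REST API. This is a hypothetical illustration only: the `/v2/evaluators` path, the `ORQ_API_KEY` variable name, and the response shape are assumptions made for the example, not confirmed API surface; consult the API Reference for the actual endpoints and fields.

```python
import os

import requests

# Hypothetical sketch: the endpoint path and response shape below are
# assumptions for illustration, not confirmed orq.ai API surface.
# See the API Reference for the real routes.
API_BASE = "https://my.orq.ai/v2"          # assumed base URL
API_KEY = os.environ["ORQ_API_KEY"]        # assumed env var holding your key


def list_evaluators() -> list[dict]:
    """Fetch the Evaluators visible to this workspace (illustrative only)."""
    response = requests.get(
        f"{API_BASE}/evaluators",                          # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("data", [])


# Print each Evaluator's key and type, e.g. to decide which ones
# to add to a Project before running an Experiment.
for evaluator in list_evaluators():
    print(evaluator.get("key"), "-", evaluator.get("type"))
```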
