
Evaluator Library

Orq.ai provides a library of Evaluators that you can include in your Projects.

Browse all Function Evaluators, Ragas Evaluators, and LLM Evaluators in the Hub.

You can also browse all available [Evaluators](doc:evaluator) and [Prompts](doc:prompt).

From the Hub, use the Add to project button to make an Evaluator available for use in an Experiment or a Deployment.

