Evaluators are automated tools that assess the performance and outputs of models within Experiments, Deployments, and Agents. Evaluators can verify outputs against reference data, check compliance with specific criteria, and perform a range of automated validations. By using Evaluators, teams can automate the validation process, maintain high-quality outputs, and ensure their AI systems operate within desired parameters.

Documentation Index

Fetch the complete documentation index at: https://docs.orq.ai/llms.txt
Use this file to discover all available pages before exploring further.
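The llms.txt index follows the common convention of a markdown file listing page titles and links. As a minimal sketch of how you might discover pages from it, the snippet below parses markdown-style links out of such a file; the sample content and URLs are hypothetical placeholders, not taken from the real index.

```python
import re


def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from markdown links in an llms.txt file."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text)


# Hypothetical excerpt in the llms.txt convention; the real index lives at
# https://docs.orq.ai/llms.txt and may use different titles and paths.
sample = """\
# orq.ai Documentation
- [Create an Evaluator](https://docs.orq.ai/docs/create-an-evaluator)
- [Evaluator Library](https://docs.orq.ai/docs/evaluator-library)
"""

pages = parse_llms_index(sample)
for title, url in pages:
    print(f"{title}: {url}")
```

In practice you would fetch the live file (for example with `urllib.request.urlopen`) and feed its text to the same parser before deciding which pages to explore.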
Quality Tracking: Evaluator results are automatically tracked in your traces. For human feedback and annotations, use the Annotations system. Add human feedback programmatically via the Annotations API or manage human review workflows with Annotation Queues in the AI Studio UI.
- Create an Evaluator
- Use an Evaluator
Evaluator Library
- Import existing Evaluators from Evaluator Library
- Use the Evaluator Library via the API
- Learn more about our different Evaluator types: