Evaluators are automated tools that assess the performance and outputs of models within an Experiment or Deployment. They can verify outputs against reference data, check compliance with specific criteria, and run other automated validations. By using Evaluators, teams can automate the validation process, maintain high-quality outputs, and ensure that their AI systems operate within desired parameters. To get started:

Create an Evaluator

Use an Evaluator

Evaluator Library
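
For intuition, the sketch below shows what a simple Evaluator might look like in Python. The function names, signatures, and scoring logic are illustrative assumptions, not the product's actual API: one evaluator verifies an output against reference data, the other checks compliance with a simple criterion.

```python
# Hypothetical sketch of two Evaluators; names and return shape are
# assumptions for illustration, not the product's actual API.

def exact_match_evaluator(output: str, reference: str) -> dict:
    """Verify a model output against reference data (1.0 = match)."""
    passed = output.strip().lower() == reference.strip().lower()
    return {"score": 1.0 if passed else 0.0, "passed": passed}

def length_limit_evaluator(output: str, max_chars: int = 500) -> dict:
    """Check compliance with a simple criterion: output length."""
    passed = len(output) <= max_chars
    return {"score": 1.0 if passed else 0.0, "passed": passed}

if __name__ == "__main__":
    print(exact_match_evaluator("Paris", "paris"))
    # {'score': 1.0, 'passed': True}
    print(length_limit_evaluator("A short answer.", max_chars=500))
    # {'score': 1.0, 'passed': True}
```

In practice, evaluators like these would run automatically over each model output in an Experiment or Deployment, with the scores aggregated to track quality over time.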

