Evaluator

Evaluators automate the performance assessment of outputs in Experiments and Deployments.

Evaluators are automated tools that assess the performance and outputs of models within an Experiment or Deployment.

Evaluators can verify outputs against reference data, ensure compliance with specific criteria, and perform various automated validations.
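As a rough illustration of these checks (this is not the platform's API; the function names and logic here are hypothetical), an evaluator can be as simple as a function that compares an output against reference data or scans it for disallowed content:

```python
import re

def exact_match_evaluator(output: str, reference: str) -> bool:
    """Hypothetical evaluator: verify the output against reference data."""
    return output.strip() == reference.strip()

def compliance_evaluator(output: str, banned_pattern: str = r"\b(password|ssn)\b") -> bool:
    """Hypothetical evaluator: return True when the output contains no banned terms."""
    return re.search(banned_pattern, output, flags=re.IGNORECASE) is None

# Run both checks against a sample model output.
sample_output = "The answer is 42."
results = {
    "exact_match": exact_match_evaluator(sample_output, "The answer is 42."),
    "compliant": compliance_evaluator(sample_output),
}
```

In practice, a suite of such checks runs automatically over every output produced in an Experiment or Deployment, and the aggregated results surface pass/fail rates.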

By using Evaluators, teams can automate the validation process, maintain high-quality outputs, and ensure that their AI systems operate within desired parameters.

To get started: