Important: Project-Level Migration in January
  • We’re migrating key configurations to project level, including Evaluators, Prompts, Human Reviews, Memory Stores, and Knowledge Bases.
  • These entities will only be visible when they’re part of a Project. If you don’t move these entities to a project, they will eventually be deleted.
  • Please review your workspace and migrate any standalone configurations to the appropriate projects before the end of January.
  • To learn more about moving entities into projects, see Migrating Entities.
Evaluators are automated tools that assess the performance and outputs of models within Experiments, Deployments, and Agents. Evaluators can verify outputs against reference data, ensure compliance with specific criteria, and perform various automated validations. By using Evaluators, teams can automate the validation process, maintain high-quality outputs, and ensure that their AI systems operate within desired parameters. To get started:

  • Create an Evaluator
  • Use an Evaluator
  • Evaluator Library
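
Conceptually, an Evaluator takes a model output (optionally with reference data or a criterion) and returns a verdict or score. The sketch below illustrates that idea in plain Python; the function names, the `EvaluationResult` shape, and the length limit are illustrative assumptions, not the platform's API.

```python
# Minimal illustrative sketch (not the platform's API): an "evaluator" is
# a function that scores a model output, optionally against a reference.
from dataclasses import dataclass


@dataclass
class EvaluationResult:
    passed: bool
    score: float
    reason: str


def exact_match_evaluator(output: str, reference: str) -> EvaluationResult:
    """Verify a model output against reference data (case-insensitive exact match)."""
    matched = output.strip().lower() == reference.strip().lower()
    return EvaluationResult(
        passed=matched,
        score=1.0 if matched else 0.0,
        reason="output matches reference" if matched else "output differs from reference",
    )


def max_length_evaluator(output: str, max_chars: int = 500) -> EvaluationResult:
    """Ensure compliance with a specific criterion (maximum response length)."""
    within_limit = len(output) <= max_chars
    return EvaluationResult(
        passed=within_limit,
        score=1.0 if within_limit else 0.0,
        reason=f"{len(output)} chars (limit {max_chars})",
    )


if __name__ == "__main__":
    print(exact_match_evaluator("Paris", "paris"))      # passed=True
    print(max_length_evaluator("A short answer", 500))  # passed=True
```

In practice, Evaluators like these run automatically against the outputs produced by an Experiment, Deployment, or Agent, so failing outputs can be flagged without manual review.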