You can also create an Evaluator using the API; see Creating an Evaluator via the API.
To create an Evaluator, click the + button and select Evaluator.
The following modal opens:

Select the Evaluator type
Configure Model & Output
Then select the model you would like to use to evaluate the output (the model needs to be enabled in your Model Garden). Choose which type of output your model evaluation will provide:
- Boolean, if the evaluation generates a True/False response.
- Number, if the evaluation generates a Score.
Configure Prompt
Your prompt has access to the following string variables:
- {{log.input}} contains the last message sent to the model.
- {{log.output}} contains the output response generated by the evaluated model.
- {{log.messages}} contains the messages sent to the model, without the last message.
- {{log.retrievals}} contains Knowledge Base retrievals.
- {{log.reference}} contains the reference used to compare the output against.
Example
Evaluating the Familiarity of an output
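For instance, a prompt for this example might look like the sketch below. It assumes a Number-type Evaluator and a 1–10 scale; the wording is illustrative only and relies on the variables described above.

```
You are evaluating how familiar (casual and friendly) the tone of a model response is.

User message:
{{log.input}}

Model response:
{{log.output}}

Rate the familiarity of the response on a scale from 1 (very formal) to 10 (very familiar).
Return only the number.
```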
Testing an Evaluator
Within the Studio, a Playground is available to test an evaluator against any output. This helps you quickly validate that an evaluator is behaving correctly. To do so, first configure the request:
Here you can configure the LLM payload that will be sent to an evaluator.
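As a rough illustration, the payload fields mirror the {{log.*}} variables described above. The field names and shape below are assumptions made for this sketch, not the product's exact schema, and comments are added purely for annotation:

```
{
  // Illustrative only – field names are assumptions, not the exact schema
  "input": "Can you help me reset my password?",      // becomes {{log.input}}
  "output": "Sure thing! Go to Settings > Security.", // becomes {{log.output}}
  "messages": [                                       // becomes {{log.messages}}
    { "role": "system", "content": "You are a helpful support assistant." }
  ],
  "retrievals": [],                                    // becomes {{log.retrievals}}
  "reference": ""                                      // becomes {{log.reference}}
}
```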

An LLM Evaluator test response.
Guardrail Configuration
Within a Deployment, you can use your LLM Evaluator as a Guardrail, which lets you validate the input and output of a deployment generation. Enabling the Guardrail toggle will block payloads that don't meet the required score or expected boolean response. Once created, the Evaluator will be available to use in Deployments. To learn more, see Evaluators & Guardrails in Deployments.