Prerequisites
Before creating an Experiment, you need a Dataset. This dataset contains the Inputs, Messages, and Expected Outputs used for running an Experiment.

- Inputs – Variables that can be used in the prompt message, e.g. {{firstname}}.
- Messages – The prompt template, structured with system, user, and assistant roles.
- Expected Outputs – Reference responses that evaluators use to compare against newly generated outputs.
You don’t need to include all three entities when uploading a dataset. Depending on your experiment, you can choose to include only inputs, messages, or expected outputs as needed. For example, you can create a dataset with just inputs.
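As an illustration, a single dataset entry can be thought of as a record with these three optional parts. The sketch below uses a plain Python dict; the field names `inputs`, `messages`, and `expected_output` are illustrative only, not a prescribed upload format:

```python
# A hypothetical dataset entry, shown as a plain Python dict.
# Field names are illustrative; they mirror the three parts
# described above, not a prescribed upload schema.
dataset_entry = {
    # Inputs: variables referenced in the prompt, e.g. {{firstname}}
    "inputs": {"firstname": "Ada"},
    # Messages: the prompt template with system/user/assistant roles
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a greeting for {{firstname}}."},
    ],
    # Expected Output: the reference response evaluators compare against
    "expected_output": "Hello Ada, great to see you!",
}

# A dataset may also contain only inputs, as noted above:
inputs_only_entry = {"inputs": {"firstname": "Grace"}}
```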
Creating an Experiment
To create an Experiment, head to the orq.ai Studio:

- Choose a Project and Folder and select the + button.
- Choose Experiment.
Configuring Experiment
Data Entry Configuration

To add a new data entry, select the Add Row button.
Each entry's Inputs, Messages, and Expected Outputs can be edited independently by selecting a cell.
Prompt Configuration
Your chosen prompts are displayed as separate columns within the Response section. Each prompt is assigned a corresponding letter (see A and B above) to identify its performance and Evaluator results.
To add a new Prompt, open the sidebar and choose +Prompt.
Select the Prompt Name to open the Prompt panel and configure the Prompt Template.
There are 3 ways to configure your prompt:
- Using the Messages column in the dataset.
- Using the configured Prompt.
- Using a combination of the configured Prompt and the Messages column.

Open the Prompt panel by selecting the prompt's name in the left panel. Use the drop-down (blue) to choose which messages are sent to the evaluated model, for example the Messages from the configured Dataset.
To learn more about Prompt Template Configuration, see Creating a Prompt.
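To make the interaction between Inputs and the Prompt Template concrete, the sketch below shows how a {{variable}} placeholder is filled in from a dataset entry's inputs. The `render_template` helper is a simplified stand-in for what the platform does when it runs each row, not the platform's own templating engine:

```python
import re

def render_template(template: str, inputs: dict) -> str:
    """Replace {{variable}} placeholders with values from `inputs`.

    A simplified stand-in, shown only to illustrate how Inputs
    flow into the Prompt Template; unknown placeholders are left as-is.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(inputs.get(m.group(1), m.group(0))),
        template,
    )

print(render_template("Write a greeting for {{firstname}}.", {"firstname": "Ada"}))
# -> Write a greeting for Ada.
```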
Configuring Evaluators
You can add Evaluators to automatically validate and compare outputs. To add a new Evaluator, find the Evaluators panel and choose Add Evaluator. To learn more about Evaluators in Experiments, see Using Evaluator in Experiment.
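To illustrate what an Evaluator does conceptually, here is a minimal sketch of one possible check: an exact-match comparison between a generated output and the dataset's Expected Output. Real Evaluators are configured in the Studio and can be far richer; this only shows the underlying idea:

```python
def exact_match(generated: str, expected: str) -> bool:
    """Minimal illustrative evaluator: does the generated output
    match the Expected Output after trivial normalization?"""
    return generated.strip().lower() == expected.strip().lower()

print(exact_match("Hello Ada, great to see you!", "hello ada, great to see you!"))  # True
```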
Configuring Human Reviews
Human Reviews are manual reviews of generated texts that help you classify and rate outputs according to your own criteria. They can be added to your Experiment to extend automatic evaluation. To add a new Human Review, find the Human Review panel and choose Add Human Review; you can then add an existing Human Review to the Experiment. To learn more about Human Reviews and how to create them, see Human Reviews.

Human Reviews appear as a new column; each output can be reviewed individually. In the example above, an output is rated.
Using Vision in Experiments
You can also use images in combination with vision models to run an Experiment. Make sure to use the image message block and URLs in your dataset. In the example screenshot below, you can see that the image_block is pointing to the {{image_url}} input, which will iterate through the URLs in the dataset.

Setting up an Experiment with Images and a Vision model
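As an illustration of the shape such a message can take, the sketch below shows a user message whose content mixes text and an image block, with the image URL supplied by a {{image_url}} input from the dataset. The structure follows the common OpenAI-style chat format; the exact field names in your own dataset may differ, and the URLs are placeholders:

```python
# A hypothetical vision message, following the common OpenAI-style
# chat format. The {{image_url}} placeholder is resolved from the
# dataset's inputs for each row of the Experiment.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe what you see in this image."},
        {"type": "image_url", "image_url": {"url": "{{image_url}}"}},
    ],
}

# Example inputs for two dataset rows; the Experiment iterates
# through these URLs one row at a time.
rows = [
    {"image_url": "https://example.com/cat.jpg"},
    {"image_url": "https://example.com/dog.jpg"},
]
```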