
What is Human in the Loop

Human in the Loop lets you collect feedback from your end users and have your domain experts annotate each log with feedback and corrections, so your models can improve over time.

Logging Feedback

Logging feedback lets domain experts validate an output, flag defects, and record how users interacted with model-generated responses. Within any module, open the Logs tab and select a single Log; the Feedback panel appears on the right.

The panel offers the following options for qualifying the selected generation.

Feedback Types

Three types of feedback are available in the Feedback panel within your Logs.

The Feedback panel is always available within Logs.

Rating

Rating lets you qualify the overall quality of the response. The available values are:

| Rating | Description |
| --- | --- |
| good | The response was helpful and accurate. |
| bad | The response was unhelpful or inaccurate. |

Defects

Defects let you specify what was wrong with the selected generation. The available values are:

| Defect | Description |
| --- | --- |
| grammatical | Flag for responses that contain grammatical errors. |
| spelling | Flag for responses that contain spelling errors. |
| hallucination | Flag for responses that contain hallucinations or factual inaccuracies. |
| repetition | Flag for responses that contain unnecessary repetition. |
| inappropriate | Flag for responses that are deemed inappropriate or offensive. |
| off_topic | Flag for responses that do not address the user’s query. |
| incompleteness | Flag for responses that are incomplete or only partially address the query. |
| ambiguity | Flag for responses that are vague or unclear. |

You can select multiple defects for one response.

Interactions

Interactions let you qualify how the user interacted with the response. The available values are:

| Interaction | Description |
| --- | --- |
| saved | Indicates that the user saved the response. |
| selected | Indicates that the user selected this response from multiple options. |
| deleted | Indicates that the user deleted or discarded the response. |
| shared | Indicates that the user shared the response with others. |
| copied | Indicates that the user copied the response for use elsewhere. |
| reported | Indicates that the user reported this response for review. |

You can select multiple interactions for one response.

Custom Feedback

You can create custom feedback types; to learn more, see Human Review.
Feedback can also be submitted using the API; to learn more, see Adding Feedback Programmatically. A minimal sketch follows below.
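
The example below sketches what a programmatic feedback submission might look like, combining a rating with multiple defects and interactions. The endpoint URL, field names, and authentication scheme are illustrative assumptions; see Adding Feedback Programmatically for the actual API.

```python
import requests

# Hypothetical endpoint and payload shape -- consult
# "Adding Feedback Programmatically" for the real API.
API_URL = "https://api.example.com/v2/feedback"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "log_id": "log_abc123",                          # the Log being annotated (assumed field)
    "rating": "bad",                                 # one of: good, bad
    "defects": ["hallucination", "incompleteness"],  # multiple defects are allowed
    "interactions": ["copied", "reported"],          # multiple interactions are allowed
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
```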

Making Corrections

To add a correction, open the Logs tab and select a single Log to find the desired response. Below the generated response, find the Add correction button:

The Add correction button is below the Assistant response.

Adding a correction opens a new Correction message in which you can manually edit the response provided by the model. Once you have finished editing, select Save to store the correction.

The original response and the correction appear next to one another; the correction is displayed in green.

Corrections are a great way to fine-tune your models; to learn more, see Creating a Curated Dataset. The sketch below illustrates the idea.
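
As a rough illustration of how a correction can become fine-tuning data, the sketch below writes one chat-style JSONL record that pairs the original prompt with the corrected response. The record shape is an assumption for illustration; the actual export format is covered in Creating a Curated Dataset.

```python
import json

# Hypothetical curated-dataset record built from a corrected log;
# the field names are illustrative, not the platform's export schema.
record = {
    "messages": [
        {"role": "user", "content": "Summarize the quarterly report."},
        # The saved correction replaces the model's original output
        # as the training target.
        {"role": "assistant", "content": "Here is the corrected summary..."},
    ]
}

# One JSON object per line: the conventional JSONL fine-tuning format.
with open("curated_dataset.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```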