
Human in the Loop

With the improved Human-in-the-Loop feature, you have more control over your AI output. You can collect feedback from your end users and have your domain experts annotate each log with feedback and corrections for future improvements.

For example: the model generates an output. Your domain expert reviews it and finds that it is 95% correct, so they flag the output as incomplete and add a correction to make it 100% correct.

All checked and corrected logs can be saved to a dataset, allowing you to build curated datasets that can later be used to fine-tune your model.
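The curation step above amounts to plain data handling: keep only the logs that received an expert correction, and pair each input with its corrected output. The sketch below is illustrative only; the record fields and names are assumptions, not part of the Orq.ai SDK.

```python
# Illustrative sketch of curating a fine-tuning dataset from annotated logs.
# The log fields ("input", "output", "defects", "correction") are assumed
# for illustration and do not reflect the actual Orq.ai log schema.
logs = [
    {"input": "Summarize Q3 results", "output": "Revenue grew 10%.",
     "defects": ["incompleteness"],
     "correction": "Revenue grew 10%, and margins held at 42%."},
    {"input": "Translate 'hello' to French", "output": "bonjour",
     "defects": [], "correction": None},
]

# Keep only corrected logs, pairing each input with its corrected output.
curated = [
    {"prompt": log["input"], "completion": log["correction"]}
    for log in logs
    if log["correction"] is not None
]
```

Only the first log carries a correction, so `curated` holds a single prompt/completion pair ready for fine-tuning.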

There are two options to log feedback:

  1. via the Orq.ai user interface
  2. via the API

There are three feedback properties: Rating, Defects, and Interactions.

| Rating | Defects        | Interactions |
| ------ | -------------- | ------------ |
| Good   | Grammatical    | Saved        |
| Bad    | Spelling       | Selected     |
|        | Hallucination  | Deleted      |
|        | Repetition     | Shared       |
|        | Inappropriate  | Copied       |
|        | Off-Topic      | Reported     |
|        | Incompleteness |              |
|        | Ambiguity      |              |

See the code snippets below for examples of logging feedback via the API in Python and TypeScript:

```python
client.feedback.report(
    property="defects",
    value=["grammatical", "hallucination"],  # Can include multiple defects
    trace_id="unique-trace-id",
)
```

```typescript
const feedbackPayload: FeedbackReport = {
  property: 'defects',
  value: ['grammatical', 'hallucination'], // Can include multiple defects
  trace_id: 'unique-trace-id',
};
```
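A rating can be logged in the same shape. The sketch below builds the payload locally to show its expected structure; the field names mirror the defects example above, and the validation set is taken from the Rating column of the table. This is an illustration, not the authoritative SDK schema.

```python
# Illustrative sketch: build a "rating" feedback payload mirroring the
# defects example above. Field names are assumptions based on that example.
ALLOWED_RATINGS = {"good", "bad"}  # From the Rating column of the table

def build_rating_feedback(rating: str, trace_id: str) -> dict:
    """Build a rating feedback payload, validating the rating value."""
    if rating not in ALLOWED_RATINGS:
        raise ValueError(f"rating must be one of {sorted(ALLOWED_RATINGS)}")
    return {"property": "rating", "value": rating, "trace_id": trace_id}

payload = build_rating_feedback("good", "unique-trace-id")
# The payload would then be sent via the SDK, e.g.:
# client.feedback.report(**payload)
```

Validating against the allowed values before sending keeps malformed feedback out of your logs.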

📘

For the technical documentation, please see: Node SDK & Python SDK