

Annotations are structured key-value pairs that capture human feedback on traces and spans in your observability data. They enable quality assessment, human review workflows, and training dataset curation from human-reviewed traces.

Use Cases

  • Capture thumbs up/down ratings, custom scores, or categorical labels on AI responses. Build a feedback loop that surfaces low-quality generations for review.
  • Flag responses with specific defects (hallucination, off-topic, inappropriate content) using structured annotation keys shared across your team.
  • Annotate traces with corrections and quality labels, then export curated subsets as training datasets for future experiments.
  • Route traces to annotation queues for systematic expert review. Combine with Trace Automations to automatically surface traces that meet specific criteria.
Concepts

Three concepts work together to form the annotations system:
  • Human Review: defines the schema (key, value type, options) that annotations must conform to
  • Annotations: the actual feedback values applied to a trace or span
  • Annotation Queues: organized workflows for reviewing traces in bulk via the AI Studio

Human Review

Define annotation schemas: keys, value types, and validation rules. Available on all traces and spans in the project once created.

Annotations

Apply structured human feedback to traces and spans via the AI Studio or programmatically via the API.

Annotation Queues

Organize human review workflows. Filter and present relevant traces for review in bulk.
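
As a mental model, the relationship between these three concepts can be sketched as a small data structure. The sketch below is illustrative only; the field names and types are assumptions, not the actual orq.ai schema.

```python
from dataclasses import dataclass, field
from typing import Literal

# Illustrative data model only; field names are assumptions, not the orq.ai schema.

@dataclass
class HumanReview:
    """Schema that annotations must conform to."""
    key: str                                            # e.g. "quality"
    value_type: Literal["categorical", "range", "open_field"]
    options: list[str] = field(default_factory=list)    # button labels for categorical reviews

@dataclass
class Annotation:
    """A feedback value applied to a trace or span."""
    key: str                  # must match an existing HumanReview.key in the project
    value: str | int
    trace_id: str
    span_id: str | None = None

@dataclass
class AnnotationQueue:
    """A bulk-review workflow over a set of traces."""
    name: str
    human_review_keys: list[str]
    trace_ids: list[str] = field(default_factory=list)
```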

Create Human Review

Human Reviews define the structure and validation rules for annotations. Each annotation key must match an existing Human Review definition in the project.
To create a Human Review, head to Project Settings > Human Review and press the + button.
Human Review settings
Three Human Review types are available:
  • Categorical: button options with custom labels, such as good/bad or saved/deleted
  • Range: a custom scoring slider, for example a scale from 0 to 100
  • Open field: free-form text input for detailed comments
Once created, a Human Review is available on all traces and spans in the project. No additional configuration or filtering required.
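
To make the three types concrete, the sketch below shows how annotation values could be validated against each type. It is a conceptual illustration, not orq.ai's own validation logic, and the parameter names are assumptions.

```python
# Conceptual validation of the three Human Review types; not orq.ai's own code.

def validate(value, review_type: str, *, options=None, min_value=0, max_value=100) -> bool:
    if review_type == "categorical":
        # Value must be one of the configured button labels, e.g. "good" / "bad".
        return value in (options or [])
    if review_type == "range":
        # Value must fall within the configured slider bounds, e.g. 0 to 100.
        return isinstance(value, (int, float)) and min_value <= value <= max_value
    if review_type == "open_field":
        # Free-form text: any non-empty string is accepted.
        return isinstance(value, str) and len(value.strip()) > 0
    return False

assert validate("good", "categorical", options=["good", "bad"])
assert validate(87, "range")
assert validate("Misses the second part of the question.", "open_field")
```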

Common Annotation Types (Legacy)

Rate the overall quality of AI responses:
  • good: The response was helpful and accurate.
  • bad: The response was unhelpful or inaccurate.
Identify specific issues with AI responses:
  • grammatical: Responses that contain grammatical errors
  • spelling: Responses that contain spelling errors
  • hallucination: Responses that contain hallucinations or factual inaccuracies
  • repetition: Responses that contain unnecessary repetition
  • inappropriate: Responses that are deemed inappropriate or offensive
  • off_topic: Responses that do not address the user's query
  • incompleteness: Responses that are incomplete or partially address the query
  • ambiguity: Responses that are vague or unclear
You can select multiple defects for one response by using an array-type Human Review.
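
For example, a single response flagged with several defects might be represented like the sketch below; the payload shape and identifier are assumptions for illustration, not the documented format.

```python
# Illustrative payload for an array-type Human Review; the exact shape is an assumption.
annotation = {
    "key": "defects",                         # must match a Human Review defined in the project
    "value": ["hallucination", "off_topic"],  # multiple defects applied to one response
    "trace_id": "trace_123",                  # hypothetical trace identifier
}
```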

Use Annotations

The annotation capabilities differ between Logs and Traces. Logs support both human feedback and corrections, while Traces only support human feedback annotations.
Navigate to the Traces view and select a single trace. The Annotations panel will be displayed, allowing you to apply human feedback to the AI response.
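
Besides the AI Studio panel, annotations can also be applied programmatically via the API, as noted above. The sketch below is a minimal illustration of that idea; the endpoint path, payload fields, and environment variable are assumptions, so check the API reference for the exact contract.

```python
import os
import requests

# Hypothetical sketch of annotating a trace via the API.
# The endpoint path and payload fields are assumptions, not the documented contract.
API_KEY = os.environ["ORQ_API_KEY"]
TRACE_ID = "trace_123"  # hypothetical trace identifier

response = requests.post(
    f"https://api.orq.ai/v2/traces/{TRACE_ID}/annotations",   # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"key": "quality", "value": "good"},                 # key must match an existing Human Review
    timeout=10,
)
response.raise_for_status()
print(response.json())
```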

Create Annotation Queues

Annotation Queues help you organize and apply Human Reviews effectively to relevant incoming traces.
To create an Annotation Queue, head to AI Studio > Annotation Queue. Choose Create Annotation Queue. The following fields are configurable:
  • The Name of the queue
  • The Description of the Annotation Queue
  • The Human Reviews that will be applied to traces in the queue
Creating Annotation Queue
Annotation Queues can be filled with traces using Trace Automations, which automatically route traces to the appropriate queue based on your configured rules.
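
Conceptually, an automation pairs a filter condition with a target queue. The sketch below illustrates that routing idea in plain Python; it is not the actual Trace Automations configuration format, and the trace fields and thresholds are assumptions.

```python
# Conceptual illustration of routing traces to a review queue; not the Trace Automations format.

def route_to_queue(trace: dict, queue: list, *, min_latency_ms: int = 2000) -> None:
    """Add a trace to the review queue when it matches the filter criteria."""
    if trace.get("latency_ms", 0) >= min_latency_ms or trace.get("user_feedback") == "bad":
        queue.append(trace["id"])

review_queue: list[str] = []
route_to_queue({"id": "trace_123", "latency_ms": 3400}, review_queue)
route_to_queue({"id": "trace_456", "latency_ms": 150, "user_feedback": "bad"}, review_queue)
print(review_queue)  # ['trace_123', 'trace_456']
```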

Use Annotation Queues

When you open an Annotation Queue, you have direct access to all traces that need feedback.
Annotation Queue
On the right side of the panel, review and apply feedback to the conversation. Use Next to access the next trace in the queue.
You can also manually add any trace to a Dataset for use in a future Experiment, enabling further testing of model behavior.