Capturing and Leveraging User Feedback in Orq.ai

This cookbook shows how to log and manage user feedback in your FAQ chatbot using Orq.ai. Whether you're tracking quality defects, user actions, or overall sentiment, structured feedback logging helps you continuously improve your chatbot's accuracy, relevance, and user experience.

Instead of relying on vague insights or manual reviews, Orq.ai’s feedback logging system allows you to:

✅ Capture real-time user ratings (good/bad) on chatbot responses

✅ Log specific defects like grammatical errors, hallucinations, or ambiguity

✅ Maintain structured conversation history to refine future responses

By integrating feedback logging, you create a chatbot that learns from user input and evolves over time—no guesswork, just data-driven improvements!

Step 1: Install Dependencies

Before starting, ensure you have an Orq account; if not, sign up first. We’ve also prepared a Google Colab file that you can copy and run immediately, simplifying the setup process: just replace the API key and you’re ready to go. For more advanced topics, check out the Orq documentation.

Start by installing the required package to use the Orq SDK:

pip install orq-ai-sdk 

Step 2: Set Up the Orq Client

Next, set up the Orq client using your API key. Replace the placeholder with your actual API key.

import os
from orq_ai_sdk import Orq

# Initialize the Orq client, reading the API key from the environment
# (replace the placeholder fallback with your actual key)
api_key = os.getenv("ORQ_API_KEY", "your_api_key_here")
client = Orq(api_key=api_key)
orq = client  # Alias used by the feedback-logging code later in this cookbook

Step 3: Setting Up a Knowledge Base in Orq.ai

To power the FAQ bot, you'll need a knowledge base containing relevant documents. In Orq.ai, knowledge bases are built using vector embeddings, enabling the bot to retrieve the most relevant information for any query.

For this setup, we scraped our technical documentation and uploaded it to the knowledge base via the Orq platform. Keep in mind that this approach does not ensure continuous updates — any changes to your documentation will need to be manually re-uploaded.

To upload a knowledge base in Orq.ai:

  1. Create a New Knowledge Base in the Orq workspace.
  2. Upload Documents by dragging files.
  3. Process the Files to generate vector embeddings, making your content searchable by the bot.

For a more detailed explanation, see the documentation.
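
If you want to script the scraping step, the sketch below shows one possible approach. It assumes your documentation is public HTML and uses requests and BeautifulSoup; the URL and output file are placeholders, so adapt them to your own docs before uploading the result to the knowledge base.

import requests
from bs4 import BeautifulSoup

def scrape_page_to_text(url, out_path):
    # Fetch one docs page and save its readable text for manual upload
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts, styles, and navigation chrome before extracting text
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(soup.get_text(separator="\n", strip=True))

# Placeholder URL: point this at a page of your own documentation
scrape_page_to_text("https://docs.orq.ai/docs/overview", "orq_docs.txt")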

Step 4: Orq FAQ Chat Prompt

This prompt defines the behavior of Orq.ai’s FAQ bot, ensuring responses are accurate, context-driven, and based only on the provided knowledge base. The assistant acts as a customer service agent, delivering factual answers while avoiding speculation.

The prompt includes clear instructions to maintain professionalism:

✅ Use only the knowledge base for answers
✅ Express uncertainty when information is unclear
✅ Avoid opinions or assumptions
✅ Break down complex topics into simple explanations
✅ Use objective, neutral language

This ensures reliable and well-supported answers for users across various contexts.

This is the general prompt in Orq.ai:

### Role
You are a customer service assistant working for Orq.ai, specialized in answering questions as accurately and factually as possible given all provided context. If there is no provided context, don’t answer the question but say: “Sorry, I don’t have the information to answer your question”. Your goal is to provide clear, concise, and well-supported answers based on information from a knowledge base.

### Instructions
When responding:
* Express uncertainty on unclear or debatable topics
* Avoid speculation or personal opinions
* Break down complex topics into understandable explanations
* Use objective, neutral language

When asked a question:
ONLY use the following data coming from the knowledge base to answer the question:

<data_you_can_use>
{{orq_technical_docs}}
</data_you_can_use>

Step 5: Define the Interaction Function

The bot will need a function to handle user input, manage conversation memory, and invoke the RAG deployment. Here’s a sample function:

def chat_with_deployment(message, conv_memory):
    # Add the user's message to the running conversation history
    conv_memory.append({"role": "user", "content": message})

    generation = client.deployments.invoke(
        key="orqai_FAQ_bot_RAGAS",
        context={
            "environments": ["production"],
        },
        metadata={"custom-field-name": "custom-metadata-value"},
        messages=conv_memory
    )

    # Store the trace id, as it is needed later for logging feedback
    trace_id = generation.id

    # Extract the response content
    response = generation.choices[0].message.content

    # Store the model's response in the conversation history
    conv_memory.append({"role": "assistant", "content": response})

    return response, trace_id
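
For example, a single turn outside the chat loop looks like this (the question text is just an illustration):

conv_memory = []  # Start with an empty conversation history
reply, trace_id = chat_with_deployment("How do I create a deployment in Orq?", conv_memory)
print(reply)
print(f"Trace ID for feedback logging: {trace_id}")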

Step 6: Run Your FAQ Bot

In a real deployment, feedback would be collected through front-end buttons (e.g., thumbs-up/down, dropdowns, or action buttons). For demonstration purposes, we simulate this process in the notebook using text-based inputs.

How the Feedback Loop Works:

  1. User Rating – After each response, users mark it as good or bad to signal quality.
  2. Logging Context – If bad, the bot stores: “REMEMBER '[response]' was a bad response to '[question]'”. This helps the model learn from past mistakes.
  3. Defect Classification – Users specify the issue (e.g., grammatical, hallucination, off-topic) for targeted improvements.

By structuring feedback, we create a continuous learning loop, improving chatbot reliability, adaptability, and user experience—without relying on guesswork. 🚀

defect_options = [
    "grammatical", "spelling", "hallucination", "repetition", "inappropriate", "off_topic", "incompleteness", "ambiguity"
]

def chatbot():
    conv_memory = []
    trace_ids = []
    
    print("\nYou can now start chatting! Type 'exit' or 'quit' to end the chat.\n")
    
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            print("Ending chat.")
            break
        
        # Get model response
        response, trace_id = chat_with_deployment(user_input, conv_memory)
        trace_ids.append(trace_id)
        print(f"Assistant (trace ID: {trace_id}): {response}")
        
        # Get feedback
        feedback = input("Provide feedback (good/bad) or press Enter to skip: ").strip().lower()
        if feedback in ["good", "bad"]:
            res = orq.feedback.create(field="rating", value=[feedback], trace_id=trace_id)
            print(f"Feedback logged: {res}")
            
            # If feedback is bad, append a structured message to conversation history and request defect details
            if feedback == "bad":
                conv_memory.append({"role": "user", "content": f"REMEMBER '{response}' was a bad response to the following question: '{user_input}'"})
                
                # Request defect type
                defect_feedback = input("What was wrong with the response? (Choose from: grammatical, spelling, hallucination, repetition, inappropriate, off_topic, incompleteness, ambiguity): ").strip().lower()
                
                if defect_feedback in defect_options:
                    defect_res = orq.feedback.create(field="defects", value=[defect_feedback], trace_id=trace_id)
                    print(f"Defect feedback logged: {defect_res}")
                else:
                    print("Invalid defect type. No defect feedback logged.")

# Run chatbot
chatbot()

Next Steps

Great job! You’ve implemented a structured feedback loop for your FAQ bot, ensuring continuous learning and response improvement. To take it further:

  • Integrate interaction tracking – Link front-end actions (copied, saved, deleted, shared) to feedback logging, allowing the bot to learn without requiring explicit user input. A sketch of this idea follows after this list.

  • Create annotated datasets in Orq – Use feedback as a selection method to build curated datasets for evaluation. Run experiments to see if updates to prompts, models, parameters, or the knowledge base improve performance and response quality.
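
The sketch below illustrates interaction tracking by reusing the same feedback call from Step 6. The "interaction" field name and the action values are assumptions for illustration, not a confirmed Orq schema, so check the feedback documentation for the exact fields your workspace supports.

# Hypothetical helper: log a front-end action against a response's trace id.
# The "interaction" field and the action names are assumptions, not a confirmed schema.
def log_interaction(action, trace_id):
    allowed_actions = {"copied", "saved", "deleted", "shared"}
    if action in allowed_actions:
        return orq.feedback.create(field="interaction", value=[action], trace_id=trace_id)
    return None

# e.g. call log_interaction("copied", trace_id) from your front-end event handler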

By embedding feedback directly into user interactions, you create a frictionless improvement cycle, making your FAQ bot more adaptive and user-friendly.

For more resources and advanced features, visit the Orq.ai Documentation.