
   Common Architectures

Proven implementation patterns for building AI applications, from simple LLM integrations to complex multi-agent systems.

Simple Deployment

Route LLM calls through Orq.ai as an AI Gateway, the most straightforward integration pattern.
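The gateway pattern boils down to sending an OpenAI-style chat request to a proxy endpoint instead of the provider directly. A minimal sketch of assembling such a request follows; the base URL, header names, and model name are illustrative placeholders, not the actual Orq.ai API, so consult the guide itself for real values.

```python
import json

# Hypothetical gateway endpoint; the real URL comes from the guide.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"

def build_gateway_request(api_key: str, model: str, user_message: str) -> dict:
    """Assemble an OpenAI-style chat request that a gateway would proxy."""
    return {
        "url": GATEWAY_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

request = build_gateway_request("sk-demo", "gpt-4o-mini", "Hello!")
```

Because the request shape stays OpenAI-compatible, switching providers behind the gateway requires no client-side changes.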

Chatbot Architecture

Build conversational AI with memory, context awareness, and intelligent escalation.

Simple RAG

Ground LLM responses in your knowledge base for accurate, context-aware answers.
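The core RAG loop is retrieve-then-generate: find relevant snippets, then build a prompt that constrains the model to them. This sketch stands in keyword overlap for real retrieval (which would use embeddings and a vector store); the knowledge-base snippets are invented for illustration.

```python
# Toy knowledge base; a real system would store embeddings in a vector DB.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Passwords can be reset from the account settings page.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank snippets by word overlap with the query (stand-in for vector search)."""
    words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: -len(words & set(s.lower().split())),
    )
    return scored[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Inline the retrieved context so the model answers from it, not memory."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("How long do refunds take?")
```

The prompt now carries the authoritative snippet, so the model's answer stays grounded in your data.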

Advanced RAG

Add reranking, hybrid search, and query optimization to your RAG system.

AI Agent

Build autonomous agents with tool calling, memory, and multi-agent coordination.

Agents Framework Guide

Integrate Orq.ai with LangGraph, CrewAI, and AutoGen for observability and control.

AI Gateway vs Config Management

Understand the differences between the two core integration patterns.

   Chatbots & AI Apps

End-to-end guides for building production chatbots, conversational AI, and multi-agent applications.

Customer Support Chat

Build a production-ready chatbot with streaming, fallbacks, caching, and RAG.

Chat History

Maintain conversation history across messages for stateful LLM interactions.
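Statefulness with a chat API usually means resending the full message list on every turn so the model sees prior context. A minimal sketch of that bookkeeping, using the common `system`/`user`/`assistant` role convention:

```python
class ChatHistory:
    """Accumulate conversation turns to resend with each request."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str) -> list[dict]:
        """Append the user turn and return the full list to send to the model."""
        self.messages.append({"role": "user", "content": text})
        return self.messages

    def add_assistant(self, text: str) -> None:
        """Record the model's reply so the next turn includes it."""
        self.messages.append({"role": "assistant", "content": text})

history = ChatHistory("You are a helpful assistant.")
history.add_user("What is RAG?")
history.add_assistant("RAG grounds responses in retrieved documents.")
history.add_user("Give an example.")
```

In production you would also cap or summarize the list to stay within the model's context window.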

Multilingual FAQ Bot

Build a multilingual FAQ chatbot with RAG and dynamic language routing.

Intent Classification

Categorize user queries for chatbots, support, and task automation.
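Intent classification typically means prompting the model with a fixed label set and validating its reply against that set. The labels and the simulated reply below are placeholders for illustration; no model is actually called in this sketch.

```python
# Hypothetical intent labels; a real bot defines these per use case.
INTENTS = {"billing", "technical_support", "account", "other"}

def build_classification_prompt(query: str) -> str:
    """Ask for exactly one label so the reply is machine-parseable."""
    labels = ", ".join(sorted(INTENTS))
    return (
        f"Classify the user query into exactly one of: {labels}.\n"
        f"Reply with the label only.\n\nQuery: {query}"
    )

def parse_intent(model_reply: str) -> str:
    """Normalize the reply; fall back to 'other' on anything unexpected."""
    label = model_reply.strip().lower()
    return label if label in INTENTS else "other"

# Simulated model reply, since this sketch makes no API call:
intent = parse_intent("  Billing\n")
```

The fallback to `other` keeps downstream routing safe even when the model strays from the label set.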

Lovable Integration

Build RAG-powered FAQ bots using prompt-based development without a backend.

Multi-Agent HR System

Build a multi-agent system with specialized agents, memory, and knowledge bases.

   Data & Extraction

Use AI to extract structured data from unstructured documents, images, and natural language inputs.

PDF Extraction

Extract structured data from PDF invoices using vision models.

Receipt Extraction

Process receipt images into structured JSON with vendor names, amounts, and dates.

Text-to-SQL

Transform natural language into SQL queries so non-technical users can query databases directly.
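The pattern is to give the model the table schema, ask for a query, and guard the result before executing it. The schema and the read-only check below are illustrative assumptions, not taken from the guide:

```python
# Hypothetical schema passed to the model as context.
SCHEMA = "CREATE TABLE orders (id INT, customer TEXT, total REAL, created_at DATE);"

def build_sql_prompt(question: str) -> str:
    """Pair the schema with the question so the model can name real columns."""
    return (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write a single SQL SELECT statement answering: {question}"
    )

def is_safe_select(sql: str) -> bool:
    """Allow read-only queries only; reject anything that mutates data."""
    lowered = sql.strip().lower()
    forbidden = ("insert", "update", "delete", "drop", "alter")
    return lowered.startswith("select") and not any(w in lowered for w in forbidden)
```

A keyword blocklist is a coarse guard; production systems should also use a read-only database role.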

   Evaluation & Safety

Test, evaluate, and red-team your LLM deployments to ensure quality, reliability, and security.

Parallel Evaluations

Run evaluations in parallel at scale using evaluatorq.

Red Teaming

Probe LLM deployments and agents for security vulnerabilities with evaluatorq.

   Integrations & Tooling

Connect Orq.ai with your existing tools, workflows, and third-party infrastructure.

Chaining Deployments

Chain multiple LLM deployments and run evaluators across multi-step workflows.

n8n Integration

Use Orq.ai deployments and routing inside n8n workflow automation pipelines.

Prompt Manager

Fetch deployment configurations at runtime while keeping control over your infrastructure.

Capturing Feedback

Implement structured user feedback to improve LLM responses over time.

Third-Party Vector DBs

Connect Pinecone or other vector databases to Orq.ai for custom RAG pipelines.

   Learn

Reference guides and conceptual explainers for LLM concepts, prompt engineering, and platform features.

LLM Glossary

Definitions for 200+ LLM, LLMOps, and prompt engineering terms.

Prompt Engineering Guide

Best practices for LLM optimization, structured prompts, and consistent outputs.

Prompt Templating

Complete reference for Jinja and Mustache templates in deployments and experiments.
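To show the idea behind Mustache-style placeholders, here is a toy renderer that substitutes `{{ name }}` variables with regex. It is a concept sketch only; the actual Jinja and Mustache engines covered in the reference support far more (conditionals, loops, partials).

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    """Replace each {{ name }} placeholder; unknown names render as empty."""
    def substitute(match: re.Match) -> str:
        return variables.get(match.group(1), "")
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

rendered = render(
    "Hello {{ name }}, your plan is {{plan}}.",
    {"name": "Ada", "plan": "Premium"},
)
```

Templating like this keeps prompts as versioned assets with runtime variables, rather than strings hardcoded in application code.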

Understanding Control Tower

Non-technical guide to monitoring AI agents, tracking costs, and staying in control.