Overview
Orq Skills are pre-built, reusable workflows from the orq-ai/orq-skills repository. They come in two forms:
- Skills: multi-step workflows that require reasoning, such as building an agent, running an experiment, or analyzing trace failures.
- Commands: quick slash-command actions for immediate results, such as listing traces or showing analytics.
Prerequisites
- An active orq.ai account
- An API key
- The Orq MCP server connected to your assistant (see MCP Quickstart)
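If your assistant reads MCP servers from a JSON config file, the entry is typically shaped like the sketch below. The server name and URL here are placeholders, not real values; take the actual connection details from the MCP Quickstart.

```
{
  "mcpServers": {
    "orq": {
      "url": "https://example.invalid/orq-mcp"
    }
  }
}
```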
Installation
Choose the option that matches your assistant, and use one path only: the Claude Code plugin install already includes the MCP server, so following both paths would install the MCP server twice. Commands (/orq:quickstart, /orq:workspace, and others) and agents are only available with the Claude Code plugin.
Verify
Claude Code: Run the interactive onboarding command, /orq:quickstart, to confirm everything is working.
Commands
Quick-action slash commands available in Claude Code. Use /orq:<command> to trigger them.
| Command | Description | Usage |
|---|---|---|
| quickstart | Interactive onboarding: credentials, MCP setup, skills tour | /orq:quickstart |
| workspace | Workspace overview: Agents, Deployments, Prompts, Datasets, Experiments | /orq:workspace [section] |
| traces | Query and summarize Traces with filters | /orq:traces [--deployment name] [--status error] [--last 24h] |
| models | List available AI models by provider | /orq:models [search-term] |
| analytics | Usage Analytics: requests, cost, tokens, errors | /orq:analytics [--last 24h] [--group-by model] |
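For instance, combining the filters from the table above to pull recent failing traces for one deployment (the deployment name is a placeholder):

```
/orq:traces --deployment my-deployment --status error --last 24h
```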
Skills
Skills are triggered by describing what you need; the assistant picks the right skill automatically.
| Skill | Description | Docs |
|---|---|---|
| build-agent | Design, create, and configure an Orq.ai Agent with tools, instructions, Knowledge Bases, and Memory | SKILL.md |
| build-evaluator | Create validated LLM-as-a-Judge Evaluators following evaluation best practices | SKILL.md |
| analyze-trace-failures | Read production Traces, identify what is failing, build failure taxonomies, and categorize issues | SKILL.md |
| run-experiment | Create and run Orq.ai Experiments: compare configurations with specialized agent, conversation, and RAG evaluation | SKILL.md |
| generate-synthetic-dataset | Generate and curate evaluation Datasets: structured generation, quick from description, expansion, and dataset maintenance | SKILL.md |
| optimize-prompt | Analyze and optimize system Prompts using a structured prompting guidelines framework | SKILL.md |
| setup-observability | Instrument LLM applications with orq.ai tracing. Covers AI Router (zero-code traces) and OpenTelemetry/OpenInference. Guides from framework detection through baseline verification to trace enrichment | SKILL.md |
| compare-agents | Run cross-framework agent comparisons — compare any combination of orq.ai, LangGraph, CrewAI, or OpenAI Agents SDK agents using evaluatorq | SKILL.md |
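Because skills are invoked by description rather than a command, a plain-language request like this illustrative prompt would route to the build-agent skill:

```
Create an Orq.ai agent that answers support questions from our docs
knowledge base, give it memory, and configure its instructions.
```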
Example workflows
Instrument an existing app
Build a new agent
Debug production issues
Improve an existing agent
Improve an existing prompt
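As a sketch of how "Debug production issues" might combine the pieces above: the first line is a command from the Commands table (filter values are illustrative), and the second is a plain-language request that would trigger the analyze-trace-failures skill.

```
/orq:traces --status error --last 24h
Analyze these failing traces and build a failure taxonomy.
```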
Resources
orq-ai/orq-skills
Source repository for all skills, commands, and agents