Overview
The Claude Web App (claude.ai) supports Model Context Protocol (MCP) integrations, allowing you to access Orq.ai features directly in your Claude conversations without installing any software.
Prerequisites
- Claude Pro, Max, Team, or Enterprise subscription
- Active Orq.ai account
- Orq.ai API key
Installation
Add Custom Connector
Requirements: Claude Pro, Max, Team, or Enterprise plan
- Navigate to claude.ai/settings/connectors
- Click Add custom connector at the bottom of the Connectors section
- Configure the Orq MCP:
  - Name: Orq.ai
  - URL: https://my.orq.ai/v2/mcp
- Click Advanced settings to add authentication:
  - Add a custom header: Authorization: Bearer YOUR_ORQ_API_KEY
- Replace YOUR_ORQ_API_KEY with your actual API key
- Click Add to save the connector
The connector will be available in all your Claude.ai conversations after adding it.
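If you want to sanity-check the endpoint and header outside Claude, you can construct an MCP `initialize` request yourself. A minimal Python sketch, assuming the endpoint speaks the MCP Streamable HTTP transport (the protocol version and client info values are illustrative; the request is built but not sent):

```python
import json
import urllib.request

MCP_URL = "https://my.orq.ai/v2/mcp"

def build_initialize_request(api_key: str) -> urllib.request.Request:
    """Build a JSON-RPC 'initialize' request for the Orq MCP endpoint."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed protocol revision
            "capabilities": {},
            "clientInfo": {"name": "connector-smoke-test", "version": "0.1"},
        },
    }).encode()
    return urllib.request.Request(
        MCP_URL,
        data=body,
        method="POST",
        headers={
            # Same Authorization header the connector's Advanced settings configure
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )

req = build_initialize_request("YOUR_ORQ_API_KEY")
```

Sending this with `urllib.request.urlopen(req)` should return a non-401 response when the key is valid; a 401 points at the Authorization header.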
Verify Installation
Start a new conversation and ask Claude to list the tools available from the Orq.ai connector; if the integration is working, Claude will enumerate the MCP tools.
Available Commands
You can ask Claude to perform these operations using natural language:
Analytics
- Get analytics overview for my workspace
- Show me workspace metrics for the last 7 days
- Query analytics filtered by deployment ID
Datasets
- Create a dataset called "customer-queries"
- List all datapoints in dataset [dataset-id]
- Add datapoints to dataset [dataset-id]
- Update datapoint [datapoint-id]
- Delete dataset [dataset-id]
Experiments
- Create an experiment from dataset [dataset-id]
- List all experiment runs
- Export experiment run [run-id] as CSV
- Run experiment and auto-evaluate results
Evaluators
- Create an LLM-as-a-Judge evaluator for tone
- Create a Python evaluator to check response length
- Add evaluator to experiment [experiment-id]
Traces
- List traces from the last 24 hours
- Show me traces with errors
- Get span details for trace [trace-id]
- Find the slowest traces from today
Models & Search
- List all available AI models
- Search for datasets named "customer"
- Find experiments in project [project-id]
- List registry keys for filtering traces
Usage Examples
Create Experiments
- Use search_entities to find the “customer-queries” dataset
- Use create_experiment with the name “Model Comparison Test” and auto-run enabled
- Configure two task columns (one for GPT-5.2, one for Claude Sonnet 4.6)
- Execute both models against the dataset automatically via the auto-run option
- Provide a summary of the results with evaluation metrics
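Under the hood, each step maps to an MCP `tools/call` invocation. A sketch of what the first two steps look like as JSON-RPC payloads, assuming standard MCP envelopes; the argument names (`query`, `type`, `dataset_id`, `auto_run`) are illustrative guesses, not the documented schema:

```python
import json

def tool_call(name: str, arguments: dict, call_id: int = 1) -> str:
    """Wrap an MCP tool invocation in a JSON-RPC 'tools/call' envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Step 1: look up the dataset (argument names are assumptions)
find = tool_call("search_entities", {"query": "customer-queries", "type": "dataset"})

# Step 2: create the experiment with auto-run enabled
create = tool_call("create_experiment", {
    "name": "Model Comparison Test",
    "dataset_id": "<dataset-id from step 1>",
    "auto_run": True,
}, call_id=2)
```

In practice Claude builds these payloads for you; the sketch only shows why phrasing your request with the dataset name and “auto-run” is enough for the tool call to be assembled.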
Analyze Traces
- Calculate the time range for the last 24 hours
- Use list_traces with an error status filter
- Analyze the trace data
- Provide error count and types, affected deployments, time distribution, and suggested fixes based on error patterns
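The “last 24 hours” window in step 1 is just a pair of ISO-8601 timestamps. A minimal sketch of the calculation (how list_traces names its start/end parameters is not specified here, so only the timestamps are shown):

```python
from datetime import datetime, timedelta, timezone

def last_24h_range() -> tuple[str, str]:
    """Return (start, end) ISO-8601 timestamps covering the last 24 hours."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)
    return start.isoformat(), end.isoformat()

start, end = last_24h_range()
```

Using UTC avoids off-by-one-hour windows across daylight-saving boundaries.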
Generate Synthetic Datasets
- Generate 100 realistic customer support conversation examples (questions and expected responses)
- Use create_dataset to create a new dataset named “Support Training”
- Use create_datapoints to add all 100 conversations to the dataset
- Confirm creation with the dataset ID and a sample of the generated data
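The shape of the 100 generated examples matters more than their content. A sketch of building them locally before handing them to create_datapoints; the `{"inputs": ..., "expected_output": ...}` record shape and the topic list are assumptions to be matched against the real datapoint schema:

```python
def make_support_datapoints(n: int = 100) -> list[dict]:
    """Generate simple synthetic Q/A records for a support-training dataset."""
    topics = ["billing", "shipping", "returns", "login", "refunds"]
    datapoints = []
    for i in range(n):
        topic = topics[i % len(topics)]  # cycle topics for rough coverage
        datapoints.append({
            "inputs": {"question": f"Customer question #{i + 1} about {topic}"},
            "expected_output": f"A helpful, polite answer resolving the {topic} issue.",
        })
    return datapoints

points = make_support_datapoints(100)
```

In the actual workflow Claude writes realistic conversations rather than templated strings; the sketch only fixes the record structure and count.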
Performance Analysis
- Use query_analytics with a 7-day time range
- Analyze average latency changes over the week
- Review token usage patterns and cost trends
- Examine error rate fluctuations
- Compare performance across different models
- Provide a summary report with insights on whether performance has improved or decreased
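The “improved or decreased” verdict in the last step can be reduced to a simple comparison over the daily averages query_analytics returns. A sketch, assuming you have extracted one average-latency number per day; the 5% threshold is an arbitrary assumption:

```python
def latency_trend(daily_avg_ms: list[float]) -> str:
    """Classify week-over-week latency movement from daily averages.

    Compares the mean of the first half of the window against the second
    half; a shift of more than 5% counts as a real change.
    """
    half = len(daily_avg_ms) // 2
    first = sum(daily_avg_ms[:half]) / half
    second = sum(daily_avg_ms[half:]) / (len(daily_avg_ms) - half)
    change = (second - first) / first
    if change > 0.05:
        return "degraded"
    if change < -0.05:
        return "improved"
    return "stable"
```

The same half-window comparison works for token usage, cost, and error rates, giving one consistent rule across the report.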
Troubleshooting
Integration Not Working
- Verify the MCP URL is correct
- Check that your API key is valid and has the required permissions
- Try removing and re-adding the integration
- Clear browser cache and cookies
Slow Responses
MCP operations over HTTP can take a few seconds:
- Be patient with large dataset operations
- Break complex workflows into smaller steps
- Check Orq.ai service status
Tool Not Found
- Verify the MCP integration is active in Settings
- Try rephrasing your request
- Check the MCP tools list