Quick Start
Name your requests to track usage by application or service.
```javascript
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Write a professional email" }],
  name: "EmailAssistant-Production",
});
```
Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| `name` | string | No | The name to display on the trace. If not specified, the default system name is used. |
For backwards compatibility, `orq.name` is also supported but deprecated; use the top-level `name` for new implementations.
Default behavior: if no `name` is provided, traces fall back to the default system identifier.
Naming Conventions
Recommended patterns
```javascript
// Service-Environment
"UserAPI-Production";
"ChatBot-Development";

// Team-Service-Feature
"Platform-Auth-OAuth";
"ML-Recommendations-v2";

// Application-Version
"MobileApp-v3.1";
"WebPortal-v2.0";
```
Best practices
- Use consistent patterns across the team
- Include the environment (dev/staging/prod)
- Avoid timestamps or dynamic values
- Keep names under 50 characters
- Use alphanumeric characters and hyphens only
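These conventions can be checked in code before a request is sent. The helper below is a minimal sketch (not part of the Orq.ai SDK) that enforces the 50-character limit and the alphanumeric-plus-hyphens rule:

```javascript
// Validate a tracking name against the conventions above:
// alphanumeric characters and hyphens only, 50 characters max.
const isValidTrackingName = (name) =>
  typeof name === "string" &&
  name.length > 0 &&
  name.length <= 50 &&
  /^[A-Za-z0-9-]+$/.test(name);

console.log(isValidTrackingName("UserAPI-Production")); // true
console.log(isValidTrackingName("User API (prod)")); // false: spaces and parentheses
```

Running this check in a code review hook or at startup catches bad names before they fragment your tracking data.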
Use Cases
| Scenario | Naming Strategy | Example |
|---|---|---|
| Microservices | Service-based naming | `user-service`, `payment-api` |
| Multi-tenant | Tenant identification | `tenant-123`, `enterprise-client` |
| A/B testing | Variant tracking | `experiment-A`, `control-group` |
| Feature flags | Feature identification | `new-ui-beta`, `legacy-flow` |
Code examples
```shell
curl -X POST https://api.orq.ai/v2/router/chat/completions \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "Help me write a professional email to follow up on a job interview"
      }
    ],
    "name": "CustomerSupportBot-Production"
  }'
```
Environment Management
Configuration by environment
```javascript
const getTrackingName = (service, environment) => {
  return `${service}-${environment}`;
};

// Usage
const trackingConfig = {
  name: getTrackingName("ChatBot", process.env.NODE_ENV),
  // Results in: "ChatBot-development", "ChatBot-production"
};
```
Environment-specific examples
```javascript
// Development
name: "UserAPI-Dev"

// Staging
name: "UserAPI-Staging"

// Production
name: "UserAPI-Prod"
```
Orq.ai Studio Integration
Filtering by application
- View requests by specific app/service
- Compare performance across applications
- Track costs per application
- Monitor error rates by service
Metrics available
The following metrics are available for App Tracking:
- Request volume per application
- Response times by service
- Cost allocation by project
- Error patterns by environment
Advanced Patterns
Dynamic naming
```javascript
const generateTrackingName = (userId, feature) => {
  // For multi-tenant scenarios
  return `tenant-${userId}-${feature}`;
};

// Usage
name: generateTrackingName(user.id, "chat-assistant")
```
Feature flag integration
```javascript
const getFeatureName = (featureFlags) => {
  const activeFeatures = Object.keys(featureFlags)
    .filter((key) => featureFlags[key])
    .join("-");
  return `app-${activeFeatures}`;
};
```
Version tracking
```javascript
// Read the version from package.json
const packageVersion = require("./package.json").version;
name: `MyApp-v${packageVersion}`
```
Troubleshooting
**Names not appearing in dashboard**
- Check name follows alphanumeric + hyphens pattern
- Verify requests are being sent successfully
- Ensure name is under character limit (50 chars)
**Fragmented tracking data**
- Standardize naming conventions across team
- Use environment variables for consistency
- Implement a centralized naming function
**Too many unique names**
- Avoid timestamps or random values
- Limit to ~50 unique names per account
- Use hierarchical naming instead of flat structure
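Name sprawl can be caught at runtime before it pollutes the dashboard. This is a hypothetical helper (not part of the Orq.ai SDK) that tracks distinct names seen by the process and warns once cardinality passes the ~50-name guideline:

```javascript
// Warn when the number of distinct tracking names grows past a threshold,
// which usually indicates timestamps or random values leaking into names.
const seenNames = new Set();
const MAX_UNIQUE_NAMES = 50;

const registerTrackingName = (name) => {
  seenNames.add(name);
  if (seenNames.size > MAX_UNIQUE_NAMES) {
    console.warn(`Tracking name cardinality is ${seenNames.size}; check for dynamic values in names.`);
  }
  return name;
};
```

Calling `registerTrackingName` wherever you set `name` gives an early signal that a dynamic value has slipped into the naming scheme.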
Monitoring
The following metrics are available for monitoring.
```javascript
const appMetrics = {
  requestsByApp: {}, // Volume per application
  costsByApp: {}, // Spending per application
  latencyByApp: {}, // Performance per application
  errorsByApp: {}, // Error rates per application
  activeApps: new Set(), // Unique applications
};
```
Analytics queries
The following queries can be answered using the above metrics.
- Which applications use AI most?
- What’s the cost per application?
- Which services have highest error rates?
- How does performance vary by application?
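These questions reduce to simple aggregations over the `appMetrics` shape shown above. A sketch, assuming maps like `requestsByApp` and `costsByApp` hold per-application totals:

```javascript
// Rank applications by any per-app metric map (requests, cost, errors, ...).
const topApps = (metricByApp, limit = 5) =>
  Object.entries(metricByApp)
    .sort(([, a], [, b]) => b - a)
    .slice(0, limit)
    .map(([app, value]) => ({ app, value }));

// "Which applications use AI most?"
const requestsByApp = { "ChatBot-Prod": 1200, "UserAPI-Prod": 300, "WebPortal-v2.0": 750 };
console.log(topApps(requestsByApp, 2));
// → [{ app: "ChatBot-Prod", value: 1200 }, { app: "WebPortal-v2.0", value: 750 }]
```

The same function answers the cost and error-rate questions by passing `costsByApp` or `errorsByApp` instead.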
Best Practices
Naming standards
- Document naming conventions for your team
- Use consistent separators (hyphens recommended)
- Include environment in name for clarity
- Avoid special characters or spaces
Example Implementation
```javascript
// Centralized tracking configuration
class OrqConfig {
  static getName(service, environment = process.env.NODE_ENV) {
    return `${service}-${environment}`;
  }

  static getConfig(service) {
    return {
      name: this.getName(service),
    };
  }
}

// Usage
const orqConfig = OrqConfig.getConfig("ChatBot");
```
To keep teams across an engineering organization aligned on App Tracking principles:
- Maintain a list of approved application names
- Use code reviews to enforce naming standards
- Set up monitoring alerts for new or unexpected names
- Clean up unused tracking names regularly
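The approved-names rule can be enforced at startup or in CI. A minimal sketch, assuming the team keeps the allow-list in shared code or config (the list and helper below are illustrative, not part of the Orq.ai SDK):

```javascript
// Approved application names, e.g. loaded from a shared config file.
const APPROVED_NAMES = new Set([
  "ChatBot-Production",
  "ChatBot-Development",
  "UserAPI-Production",
]);

// Fail fast (or alert) when a request would use an unapproved name.
const assertApprovedName = (name) => {
  if (!APPROVED_NAMES.has(name)) {
    throw new Error(`Tracking name "${name}" is not on the approved list`);
  }
  return name;
};
```

Failing fast here turns a silent dashboard-fragmentation problem into an immediate, reviewable error.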
Limitations
- Name constraints: Alphanumeric characters and hyphens only
- Length limits: Maximum 50 characters per name
- Storage impact: Many unique names increase metadata storage
- Query performance: Large numbers of unique names may slow filtering
- No retroactive changes: Historical traces keep original names
Integration Examples
With external monitoring systems
```javascript
// Export metrics by application (a sketch using the prom-client library)
const client = require("prom-client");

const duration = new client.Histogram({ name: "ai_request_duration", help: "AI request duration", labelNames: ["app"] });
const cost = new client.Counter({ name: "ai_request_cost_total", help: "Cumulative AI request cost", labelNames: ["app"] });

const exportMetrics = (trackingName, responseTime, requestCost) => {
  duration.observe({ app: trackingName }, responseTime);
  cost.inc({ app: trackingName }, requestCost);
};
```
With logging
```javascript
logger.info("AI request completed", {
  trackingName: "ChatBot-Prod",
  responseTime: 1250,
  model: "gpt-4o",
  success: true,
});
```