Using Prompts
Quick Start
Reference pre-created Prompts instead of inline messages.
// Instead of inline messages
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  orq: {
    prompt: {
      id: "prompt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
      version: "latest",
    },
  },
});
Prerequisites: Prompts must be created in Orq.ai Studio before use.
Configuration
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Unique prompt identifier from Orq.ai |
| version | "latest" | Yes | Currently only "latest" is supported |
Important: You cannot use both the prompt config and inline messages in the same request.
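For example, with a placeholder prompt ID:

// Invalid: mixes a prompt reference with inline messages
const invalid = {
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  orq: { prompt: { id: "prompt_123", version: "latest" } },
};

// Valid: prompt reference only
const valid = {
  model: "openai/gpt-4o",
  orq: { prompt: { id: "prompt_123", version: "latest" } },
};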
Workflow
- Create a Prompt in the Orq.ai Studio.
- Copy the prompt ID from the Studio.
- Reference in code using prompt config.
- Deploy safely knowing prompts are version-controlled.
// Step 3: Reference in code
{
  model: "openai/gpt-4o",
  orq: {
    prompt: {
      id: "prompt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
      version: "latest"
    }
  }
}
Version Control Benefits
Production safety:
- Changes to prompts don't immediately break production
- Test prompt changes before releasing
- Rollback capability if issues arise
Development workflow:
// Development: Always use latest
const devConfig = {
  prompt: {
    id: "prompt_123",
    version: "latest", // Gets newest changes
  },
};

// Production: Pin to specific version (coming soon)
const prodConfig = {
  prompt: {
    id: "prompt_123",
    version: "v1.2.0", // Stable version
  },
};
Code Examples
cURL:

curl -X POST https://api.orq.ai/v2/proxy/chat/completions \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "orq": {
      "prompt": {
        "id": "prompt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
        "version": "latest"
      }
    }
  }'
Python:

from openai import OpenAI
import os

openai = OpenAI(
    api_key=os.environ.get("ORQ_API_KEY"),
    base_url="https://api.orq.ai/v2/proxy"
)

response = openai.chat.completions.create(
    model="openai/gpt-4o",
    extra_body={
        "orq": {
            "prompt": {
                "id": "prompt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
                "version": "latest"
            }
        }
    }
)
Node.js:

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.ORQ_API_KEY,
  baseURL: "https://api.orq.ai/v2/proxy",
});

const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  orq: {
    prompt: {
      id: "prompt_01ARZ3NDEKTSV4RRFFQ69G5FAV",
      version: "latest",
    },
  },
});
Error Handling
let response;
try {
  response = await openai.chat.completions.create({
    model: "openai/gpt-4o",
    orq: {
      prompt: {
        id: "prompt_123",
        version: "latest",
      },
    },
  });
} catch (error) {
  if (error.message.includes("prompt not found")) {
    // Fallback to inline messages
    console.warn("Prompt not found, using fallback");
    response = await openai.chat.completions.create({
      model: "openai/gpt-4o",
      messages: [
        {
          role: "user",
          content: "Default prompt content here",
        },
      ],
    });
  } else {
    throw error;
  }
}
Best Practices
Prompt management:
- Use descriptive names for prompts in the Studio
- Document prompt purpose and usage
- Test prompt changes in staging first
- Maintain prompt versioning strategy
Environment strategy:
const getPromptConfig = (environment) => {
  const prompts = {
    development: "prompt_dev_123", // Latest experimental
    staging: "prompt_staging_123", // Stable testing version
    production: "prompt_prod_123", // Production-ready version
  };

  return {
    prompt: {
      id: prompts[environment],
      version: "latest",
    },
  };
};
Fallback strategy:
class PromptManager {
  static async makeRequest(promptId, fallbackMessages) {
    try {
      return await openai.chat.completions.create({
        model: "openai/gpt-4o",
        orq: {
          prompt: { id: promptId, version: "latest" },
        },
      });
    } catch (error) {
      if (error.message.includes("prompt not found")) {
        // Graceful fallback
        return await openai.chat.completions.create({
          model: "openai/gpt-4o",
          messages: fallbackMessages,
        });
      }
      throw error;
    }
  }
}
Troubleshooting
"Prompt not found" errors:
- Verify prompt ID is correct
- Check prompt exists in your Orq.ai account
- Ensure prompt is published/active
- Confirm API Key has access to the prompt
Outdated prompt content:
- Remember that the "latest" version may have delays
- Check that the prompt was saved in the dashboard
- Verify you're using the correct environment
Performance issues:
- Prompt resolution adds minimal latency (~10ms)
- Cache prompt responses like regular requests
- Consider local caching for frequently used prompts
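A minimal local-caching sketch, assuming a prompt's response can safely be reused for a short period (only sensible for deterministic, non-user-specific prompts); cachedPromptRequest and the TTL value are illustrative, not part of the Orq.ai API:

// In-memory cache keyed by prompt ID; assumes the `openai` client
// configured earlier in this guide.
const promptCache = new Map();

async function cachedPromptRequest(promptId, ttlMs = 60_000) {
  const hit = promptCache.get(promptId);
  if (hit && hit.expires > Date.now()) return hit.response; // cache hit

  const response = await openai.chat.completions.create({
    model: "openai/gpt-4o",
    orq: { prompt: { id: promptId, version: "latest" } },
  });

  promptCache.set(promptId, { response, expires: Date.now() + ttlMs });
  return response;
}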
Monitoring
Track prompt usage:
const promptMetrics = {
  promptsUsed: new Set(), // Unique prompts referenced
  promptErrors: {}, // Errors by prompt ID
  promptPerformance: {}, // Response times by prompt
  fallbackRate: 0, // How often fallbacks are used
};
Useful metrics:
- Which prompts are used most frequently?
- What's the error rate per prompt?
- How often do we fall back to inline messages?
- What's the performance impact of prompt resolution?
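A sketch of how these metrics could be recorded; recordPromptResult and its counters are illustrative helpers, not part of any SDK:

let totalRequests = 0;
let fallbackCount = 0;

// Record one request's outcome into the promptMetrics object above
function recordPromptResult(promptId, durationMs, { error, usedFallback } = {}) {
  totalRequests += 1;
  promptMetrics.promptsUsed.add(promptId);
  promptMetrics.promptPerformance[promptId] =
    promptMetrics.promptPerformance[promptId] || [];
  promptMetrics.promptPerformance[promptId].push(durationMs);
  if (error) {
    promptMetrics.promptErrors[promptId] =
      (promptMetrics.promptErrors[promptId] || 0) + 1;
  }
  if (usedFallback) fallbackCount += 1;
  promptMetrics.fallbackRate = fallbackCount / totalRequests;
}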
Advanced Usage
Dynamic prompt selection:
const getPromptForUser = (userType, feature) => {
  const promptMap = {
    "premium-user": "prompt_premium_123",
    "trial-user": "prompt_trial_123",
    enterprise: "prompt_enterprise_123",
  };
  return promptMap[userType] || "prompt_default_123";
};

// Usage
const promptId = getPromptForUser(user.type, "chat-assistant");
A/B testing with prompts:
const getPromptForExperiment = (userId) => {
  const variants = [
    "prompt_variant_a_123", // Control
    "prompt_variant_b_123", // Experiment
  ];
  // Simple hash-based assignment
  const hash = userId.charCodeAt(0) % 2;
  return variants[hash];
};
Prompt template variables (when supported):
// Future feature - variables in prompts
{
  orq: {
    prompt: {
      id: "prompt_123",
      version: "latest",
      variables: {
        user_name: "John",
        context: "customer_support"
      }
    }
  }
}
Limitations
- Pre-creation required: Prompts must exist in Orq.ai before use
- Version limitations: Only "latest" version currently supported
- Exclusive usage: Cannot mix prompt config with inline messages
- Network dependency: Requires connection to Orq.ai for prompt resolution (see the timeout sketch after this list)
- Account scoped: Prompts cannot be shared across different Orq.ai accounts
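To mitigate the network dependency, one option is a per-request timeout with an inline-message fallback. This is a minimal sketch: the requestWithTimeout helper, the 10-second timeout, and the fallback messages are illustrative assumptions, and it relies on the OpenAI SDK's per-request options and AbortSignal.timeout (Node.js 17.3+).

// Sketch: fall back to inline messages if prompt resolution stalls.
// Assumes the `openai` client configured earlier in this guide.
async function requestWithTimeout(promptId, fallbackMessages, timeoutMs = 10_000) {
  try {
    return await openai.chat.completions.create(
      {
        model: "openai/gpt-4o",
        orq: { prompt: { id: promptId, version: "latest" } },
      },
      // Per-request abort signal supported by the OpenAI SDK
      { signal: AbortSignal.timeout(timeoutMs) },
    );
  } catch (error) {
    // Timeout or network failure: serve the inline fallback instead
    return await openai.chat.completions.create({
      model: "openai/gpt-4o",
      messages: fallbackMessages,
    });
  }
}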
Migration from Inline Messages
Before (inline messages):
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a helpful customer service assistant...",
    },
    {
      role: "user",
      content: userInput,
    },
  ],
});
After (using prompts):
// 1. Create prompt in Orq.ai dashboard with system message
// 2. Reference prompt in code
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  orq: {
    prompt: {
      id: "prompt_customer_service_123",
      version: "latest",
    },
  },
  // Note: user input handled in prompt template
});
Team Collaboration
Roles and responsibilities:
- Developers: Implement prompt references in code
- Product managers: Create and refine prompts in dashboard
- QA: Test prompt changes before production
- DevOps: Manage prompt deployments and rollbacks
Communication workflow:
- Product team updates prompt in dashboard
- Product team notifies the dev team of the changes
- Dev team tests with "latest" version
- Deploy to production when ready
- Monitor performance and roll back if needed