For the OpenAI API specification, see the API Reference.
Drop-in Integration (No Code Changes)
- Keep your existing OpenAI SDK or HTTP integration.
- Set the base URL to https://api.orq.ai/v2/proxy.
- Use your Orq.ai API key in the Authorization header.
- Call the same endpoints and payloads you already use with OpenAI.
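As a minimal sketch of the drop-in setup, the snippet below builds a /chat/completions request against the proxy using only the Python standard library; the model ID and message are illustrative placeholders, and the request is only constructed, not sent.

```python
import json
import os
import urllib.request

# Orq.ai's OpenAI-compatible proxy; same paths and payloads as OpenAI.
BASE_URL = "https://api.orq.ai/v2/proxy"

def build_chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build a /chat/completions request identical to an OpenAI one,
    except for the base URL and the Orq.ai API key."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('ORQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "openai/gpt-4o" is an illustrative model ID; use any model enabled
# in your AI Router settings.
req = build_chat_request("openai/gpt-4o", [{"role": "user", "content": "Hello"}])
print(req.full_url)  # https://api.orq.ai/v2/proxy/chat/completions
```

With an OpenAI SDK, the equivalent change is pointing the client's base URL at the same path and passing the Orq.ai key instead of an OpenAI key.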
Base URL
OpenAI-compatible endpoint:

https://api.orq.ai/v2/proxy

Authentication
Authenticate with your Orq.ai API key via the Authorization: Bearer $ORQ_API_KEY header.
To learn more about the Orq API Key, see API Key.
Authorization: Bearer $ORQ_API_KEY
Content-Type: application/json
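These two headers are the same on every AI Gateway request, so they can be assembled once and reused; a small sketch, assuming the key lives in the ORQ_API_KEY environment variable:

```python
import os

def orq_headers(api_key: str = "") -> dict:
    """Headers required by every AI Gateway request."""
    key = api_key or os.environ.get("ORQ_API_KEY", "")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

print(orq_headers("my-key")["Authorization"])  # Bearer my-key
```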
Supported Endpoints
Schema, parameters, and response formats match the OpenAI API.
- (WIP) GET /models - List available models
- (WIP) GET /models/{model} - Get model details
- POST /chat/completions - Chat completions (supports streaming, images, files, and tool calls)
- POST /completions - Text completions
- POST /embeddings - Vector embeddings
- POST /images/generations - Image generation
- POST /images/edits - Image editing
- POST /images/variations - Image variations
- POST /moderations - Text moderation
- POST /rerank - Rerank results
- POST /speech - Text-to-speech
- POST /audio/transcriptions - Transcribe audio into the input language
- POST /audio/translations - Translate audio into English
- POST /responses - Create a model response with built-in tools
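Because the paths and payload schemas mirror OpenAI's, switching endpoints only changes the URL path and request body. A sketch for /embeddings (the model ID is illustrative; no request is sent here):

```python
import json

BASE_URL = "https://api.orq.ai/v2/proxy"

# The body follows the OpenAI embeddings schema; the model ID is an
# illustrative placeholder - substitute any model from your AI Router.
embeddings_body = {
    "model": "openai/text-embedding-3-small",
    "input": ["The quick brown fox", "jumped over the lazy dog"],
}

url = f"{BASE_URL}/embeddings"
body = json.dumps(embeddings_body)
print(url)  # https://api.orq.ai/v2/proxy/embeddings
```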
Models
Use the model field exactly as you would with OpenAI, substituting the ID of any model available in the AI Router Settings.

Error Handling & Compatibility Notes
- HTTP status codes and error structures follow OpenAI’s conventions.
- Streaming, function/tool calls, and multimodal inputs (images/files) are supported on /chat/completions.
- The AI Gateway is versioned under /v2/proxy; ensure clients target this path.
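Streamed /chat/completions responses follow OpenAI's server-sent-events convention: a sequence of "data: {...}" lines terminated by "data: [DONE]". A minimal parsing sketch, using hard-coded sample chunks in place of a live HTTP stream:

```python
import json

# Sample SSE lines in the shape OpenAI-compatible streams use;
# a live response would yield these over HTTP instead.
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]

def collect_stream(lines):
    """Concatenate the content deltas from an SSE chat stream."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and SSE comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

print(collect_stream(sample_stream))  # Hello!
```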