Set Up Your API Key
To use OpenAI with Orq.ai, follow these steps:

- Navigate to Providers (in AI Studio: Model Garden > Providers; in AI Router: Providers)
- Find OpenAI in the list
- Click the Configure button next to OpenAI
- In the modal that opens, select Setup your own API Key
- Enter a name for this configuration (e.g., “OpenAI Production”)
- Paste your OpenAI API Key into the provided field
- Click Save to complete the setup
Available Models
The AI Router supports all current OpenAI models. Here are the most commonly used:

Recommended Models
| Model | Context | Best For |
|---|---|---|
| openai/gpt-5.2-pro | 128K | Latest, most advanced, production-ready |
| openai/gpt-4o | 128K | High performance, widely available |
| openai/gpt-4o-mini | 128K | Cost-effective with strong performance |
| openai/gpt-4-turbo | 128K | Long context, complex reasoning |
| openai/gpt-3.5-turbo | 4K | Budget-friendly for simple tasks |
View all Supported models
Latest Generation (GPT-5.x)
- openai/gpt-5.2-pro - Latest, most capable
- openai/gpt-5.2-chat-latest - GPT-5.2 latest
- openai/gpt-5.2 - GPT-5.2 base
- openai/gpt-5.1-chat-latest - GPT-5.1 latest
- openai/gpt-5.1 - GPT-5.1 base
- openai/gpt-5-pro - GPT-5 advanced
- openai/gpt-5-chat-latest - GPT-5 latest
- openai/gpt-5-mini - GPT-5 fast
- openai/gpt-5-nano - GPT-5 lightweight
- openai/gpt-5 - GPT-5 base
Current Generation (GPT-4.x)
- openai/gpt-4.1-2025-04-14 - GPT-4.1 April 2025
- openai/gpt-4.1-mini-2025-04-14 - GPT-4.1 mini April 2025
- openai/gpt-4.1-nano-2025-04-14 - GPT-4.1 nano April 2025
- openai/gpt-4.1 - GPT-4.1 base
- openai/gpt-4.1-mini - GPT-4.1 mini
- openai/gpt-4.1-nano - GPT-4.1 nano
- openai/gpt-4o - Latest GPT-4o, recommended
- openai/gpt-4o-2024-11-20 - GPT-4o November 2024
- openai/gpt-4o-2024-08-06 - GPT-4o August 2024
- openai/gpt-4o-2024-05-13 - GPT-4o May 2024
- openai/gpt-4o-mini - Optimized for speed and cost
- openai/gpt-4o-mini-2024-07-18 - GPT-4o mini July 2024
- openai/gpt-4-turbo - High-performance model
- openai/gpt-4-turbo-2024-04-09 - GPT-4 Turbo April 2024
Legacy Models
- openai/gpt-3.5-turbo - Cost-effective, widely used
- openai/gpt-3.5-turbo-0125 - GPT-3.5 Turbo January 2024
- openai/gpt-3.5-turbo-16k - Extended context
Reasoning Models (Advanced)
- openai/o3-pro - Advanced reasoning
- openai/o3-2025-04-16 - O3 April 2025
- openai/o3 - O3 foundation
- openai/o3-mini - O3 lightweight
- openai/o3-mini-2025-01-31 - O3-mini January 2025
- openai/o1-pro - Professional reasoning
- openai/o1-2024-12-17 - O1 December 2024
- openai/o1 - O1 foundation
- openai/o4-mini-2025-04-16 - O4-mini April 2025
- openai/o4-mini - O4-mini base
All models use the openai/ prefix.

Reasoning models (o1, o1-pro, o3, o3-pro, o4-mini) require developer mode enabled in your OpenAI account. Enable it in your account settings and accept the reasoning model terms before using these models.

Quick Start
Access OpenAI’s GPT models through the AI Router.

Using the AI Router
Access OpenAI’s GPT models (GPT-4, GPT-4o, GPT-4 Turbo) through the AI Router with advanced chat completions, streaming, function calling, and intelligent model routing. All OpenAI models are available with consistent formatting and automatic request logging.

OpenAI models use the provider slug format: openai/model-name. For example: openai/gpt-4o

Prerequisites
Before making requests to the AI Router, configure your environment and install the SDKs if you choose to use them.

Router Endpoint

To use the AI Router with OpenAI models, configure your OpenAI SDK client with the Router's base URL. Then create an API key:

- Go to API Keys
- Click Create API Key and copy it
- Store it in your environment as ORQ_API_KEY
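With the key stored, a minimal client configuration might look like the sketch below. The router_config helper and the ORQ_BASE_URL variable are illustrative assumptions; use the actual Router endpoint shown in your dashboard.

```python
import os

# Hypothetical helper: read the Router endpoint and API key from the
# environment. ORQ_BASE_URL is a stand-in for the endpoint from your
# dashboard; ORQ_API_KEY is the key created above.
def router_config() -> dict:
    return {
        "base_url": os.environ["ORQ_BASE_URL"],
        "api_key": os.environ["ORQ_API_KEY"],
    }

# With the official OpenAI SDK you would pass the same two values:
# client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
```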
Chat Completions
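As a sketch of what a chat completion call through the Router looks like, the snippet below assembles an OpenAI-style payload using only the standard library. The chat helper, the endpoint path, and ORQ_BASE_URL are illustrative assumptions; the OpenAI SDK pointed at the Router sends the same JSON.

```python
import json
import os
from urllib import request

def build_chat_request(model: str, messages: list) -> dict:
    """Assemble an OpenAI-style chat completions payload."""
    return {"model": model, "messages": messages}

def chat(base_url: str, payload: dict) -> dict:
    """POST the payload to the Router's chat completions endpoint (assumed path)."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['ORQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    "openai/gpt-4o",
    [{"role": "user", "content": "Explain model routing in one sentence."}],
)
# reply = chat(os.environ["ORQ_BASE_URL"], payload)
# print(reply["choices"][0]["message"]["content"])
```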
Send messages to OpenAI models and get intelligent responses.

Function Calling
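A function-calling request adds a tools array describing each callable function as a JSON Schema. The sketch below uses a made-up get_weather tool, and the first_tool_call helper is illustrative, not part of any SDK.

```python
import json

# Hypothetical tool: a weather lookup the model may choose to call.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
}

def first_tool_call(response: dict) -> dict:
    """Read the first tool call out of a chat completions response dict."""
    call = response["choices"][0]["message"]["tool_calls"][0]
    return {
        "name": call["function"]["name"],
        "args": json.loads(call["function"]["arguments"]),
    }

# Your code runs the named function, then sends the result back to the
# model in a follow-up message with role "tool".
```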
OpenAI models support function calling for structured interactions.

Streaming
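Streamed completions arrive as server-sent events, one data: line per chunk. The parsing helper below is an illustrative sketch of extracting the text deltas; with the OpenAI SDK you simply pass stream=True, as shown in the comments.

```python
import json

def iter_deltas(sse_lines):
    """Yield content deltas from OpenAI-style streaming chunk lines."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# SDK equivalent (client configured against the Router):
# stream = client.chat.completions.create(
#     model="openai/gpt-4o", messages=messages, stream=True)
# for chunk in stream:
#     print(chunk.choices[0].delta.content or "", end="")

sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(iter_deltas(sample))
```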
Stream responses for real-time output and improved user experience.

Using the Responses API
The Responses API combines the best of the Chat Completions and Assistants APIs. Use responses.create() with the AI Router.
Endpoint
Basic Usage
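A sketch of the request shape: the Responses API takes input (a string or message list) rather than messages. The build_responses_payload helper below is illustrative and only assembles the payload; the SDK call it feeds is shown in comments.

```python
def build_responses_payload(model: str, user_input: str) -> dict:
    """Assemble a minimal Responses API payload (model + input)."""
    return {"model": model, "input": user_input}

payload = build_responses_payload(
    "openai/gpt-4o", "Write a haiku about routers.")

# With the OpenAI SDK configured against the AI Router:
# client = OpenAI(base_url=..., api_key=...)  # key from ORQ_API_KEY
# response = client.responses.create(**payload)
# print(response.output_text)
```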
Configure the OpenAI SDK to use the AI Router and call the Responses API.

With Streaming
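A streamed Responses call emits typed events rather than raw chunks; text arrives in output-text delta events. The accumulator below is an illustrative sketch assuming dict-shaped events that mirror the SDK's event stream.

```python
def collect_output_text(events):
    """Accumulate text from Responses API streaming events.

    Assumes each event is a dict with a "type" field and that text
    deltas arrive as "response.output_text.delta" events.
    """
    parts = []
    for event in events:
        if event.get("type") == "response.output_text.delta":
            parts.append(event["delta"])
    return "".join(parts)

# SDK equivalent:
# stream = client.responses.create(
#     model="openai/gpt-4o", input="Hello", stream=True)
# for event in stream:
#     if event.type == "response.output_text.delta":
#         print(event.delta, end="")

sample = [
    {"type": "response.created"},
    {"type": "response.output_text.delta", "delta": "Hi "},
    {"type": "response.output_text.delta", "delta": "there"},
    {"type": "response.completed"},
]
text = collect_output_text(sample)
```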
Stream responses for real-time output.

With Tools
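A sketch of a Responses API payload with a tool attached. Note that, unlike Chat Completions, Responses tool definitions are flat, with name and parameters at the top level; the get_weather tool is a made-up example.

```python
# Hypothetical tool definition in the (flat) Responses API format.
weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

payload = {
    "model": "openai/gpt-4o",
    "input": "What's the weather in Paris?",
    "tools": [weather_tool],
}

# response = client.responses.create(**payload)
# Tool invocations come back as "function_call" items in response.output;
# run the named function and send its result in a follow-up request.
```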
Use function calling with the Responses API.

Automatic Request Logging
All requests made through the AI Router are automatically logged to your dashboard. You can view:

- Request details: Model used, tokens, latency
- Cost tracking: Per-request and aggregate costs
- Error monitoring: Failed requests with error messages
- Performance metrics: Response times and throughput
Troubleshooting
| Issue | Problem | Solution |
|---|---|---|
| Rate Limiting | Too many requests to OpenAI API | Implement exponential backoff for retries. The AI Router automatically retries failed requests with appropriate delays. |
| High Latency | Slow response times | Monitor dashboard for model performance. Consider using gpt-3.5-turbo for latency-sensitive applications. |
| Invalid Request Errors | Malformed API requests | Verify model name format (openai/model-name). Ensure required fields are present in messages array. Check that messages contain valid role values (user, assistant, system). |
| API Errors | HTTP errors from OpenAI | Handle errors with try/catch. Check error status codes and messages. Use the AI Router’s automatic retry mechanism. |
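The exponential-backoff advice in the table can be sketched as a small retry wrapper. The with_backoff helper, the retry count, and the delay schedule are arbitrary illustrative choices.

```python
import random
import time

def with_backoff(fn, retries: int = 5, base_delay: float = 1.0):
    """Call fn, retrying on exceptions with jittered exponential delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            # Delays grow as base_delay * 2**attempt, plus random jitter.
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Usage: wrap any request function, e.g.
# reply = with_backoff(lambda: chat(base_url, payload))
```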