Set Up Your API Key

To use OpenAI with Orq.ai, follow these steps:
  1. Navigate to Providers (in AI Studio: Model Garden > Providers, in AI Router: Providers)
  2. Find OpenAI in the list
  3. Click the Configure button next to OpenAI
  4. In the modal that opens, select Setup your own API Key
  5. Enter a name for this configuration (e.g., “OpenAI Production”)
  6. Paste your OpenAI API Key into the provided field
  7. Click Save to complete the setup
Your OpenAI API key is now configured and ready to use with Orq.ai in AI Studio or through the AI Router.
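
To confirm the key works end to end, you can send a minimal request through the router. This is a sketch, assuming ORQ_API_KEY is set in your environment; the helper names are illustrative, and the request path follows from the router base URL used later in this guide plus the OpenAI SDK's standard /chat/completions suffix:

```typescript
// Minimal sanity check for the router configuration.
function buildRouterHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

async function pingRouter(): Promise<void> {
  // Sends a tiny chat completion; a 200 response means the key is configured.
  const res = await fetch("https://api.orq.ai/v2/router/chat/completions", {
    method: "POST",
    headers: buildRouterHeaders(process.env.ORQ_API_KEY ?? ""),
    body: JSON.stringify({
      model: "openai/gpt-4o-mini",
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  console.log(res.ok ? "Key configured correctly" : `Request failed: ${res.status}`);
}
```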

Available Models

The AI Router supports all current OpenAI models. Here are the most commonly used:
| Model | Context | Best For |
| --- | --- | --- |
| openai/gpt-5.2-pro | 128K | Latest, most advanced, production-ready |
| openai/gpt-4o | 128K | High performance, widely available |
| openai/gpt-4o-mini | 128K | Cost-effective with strong performance |
| openai/gpt-4-turbo | 128K | Long context, complex reasoning |
| openai/gpt-3.5-turbo | 4K | Budget-friendly for simple tasks |

Latest Generation (GPT-5.x)

  • openai/gpt-5.2-pro - Latest, most capable
  • openai/gpt-5.2-chat-latest - GPT-5.2 latest
  • openai/gpt-5.2 - GPT-5.2 base
  • openai/gpt-5.1-chat-latest - GPT-5.1 latest
  • openai/gpt-5.1 - GPT-5.1 base
  • openai/gpt-5-pro - GPT-5 advanced
  • openai/gpt-5-chat-latest - GPT-5 latest
  • openai/gpt-5-mini - GPT-5 fast
  • openai/gpt-5-nano - GPT-5 lightweight
  • openai/gpt-5 - GPT-5 base

Current Generation (GPT-4.x)

  • openai/gpt-4.1-2025-04-14 - GPT-4.1 April 2025
  • openai/gpt-4.1-mini-2025-04-14 - GPT-4.1 mini April 2025
  • openai/gpt-4.1-nano-2025-04-14 - GPT-4.1 nano April 2025
  • openai/gpt-4.1 - GPT-4.1 base
  • openai/gpt-4.1-mini - GPT-4.1 mini
  • openai/gpt-4.1-nano - GPT-4.1 nano
  • openai/gpt-4o - Latest GPT-4o, recommended
  • openai/gpt-4o-2024-11-20 - GPT-4o November 2024
  • openai/gpt-4o-2024-08-06 - GPT-4o August 2024
  • openai/gpt-4o-2024-05-13 - GPT-4o May 2024
  • openai/gpt-4o-mini - Optimized for speed and cost
  • openai/gpt-4o-mini-2024-07-18 - GPT-4o mini July 2024
  • openai/gpt-4-turbo - High-performance model
  • openai/gpt-4-turbo-2024-04-09 - GPT-4 Turbo April 2024

Legacy Models

  • openai/gpt-3.5-turbo - Cost-effective, widely used
  • openai/gpt-3.5-turbo-0125 - GPT-3.5 Turbo January 2024
  • openai/gpt-3.5-turbo-16k - Extended context

Reasoning Models (Advanced)

  • openai/o3-pro - Advanced reasoning
  • openai/o3-2025-04-16 - O3 April 2025
  • openai/o3 - O3 foundation
  • openai/o3-mini - O3 lightweight
  • openai/o3-mini-2025-01-31 - O3-mini January 2025
  • openai/o1-pro - Professional reasoning
  • openai/o1-2024-12-17 - O1 December 2024
  • openai/o1 - O1 foundation
  • openai/o4-mini-2025-04-16 - O4-mini April 2025
  • openai/o4-mini - O4-mini base
For a complete and up-to-date list of all available OpenAI models, see Supported Models. All models are available through the AI Router with the openai/ prefix.
Use openai/gpt-5.2-pro for the latest model, or openai/gpt-4o for the best balance of performance, cost, and availability.
Reasoning models (o1, o1-pro, o3, o3-pro, o4-mini) require developer mode to be enabled in your OpenAI account. Enable it in your account settings and accept the reasoning model terms before using these models.
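
Note that reasoning models take slightly different parameters from standard chat models: OpenAI's API expects max_completion_tokens rather than max_tokens for them, and most do not accept a custom temperature. A minimal request-building sketch (the helper name is illustrative; verify parameter support for the specific model you enable):

```typescript
// Build request params for a reasoning model (o1/o3/o4-mini family).
// These models use max_completion_tokens and reject temperature,
// so the unsupported parameters are simply omitted.
interface ReasoningRequest {
  model: string;
  messages: { role: "user" | "assistant"; content: string }[];
  max_completion_tokens?: number;
}

function buildReasoningRequest(
  model: string,
  prompt: string,
  maxCompletionTokens = 1024,
): ReasoningRequest {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    max_completion_tokens: maxCompletionTokens,
  };
}

const req = buildReasoningRequest("openai/o3-mini", "Prove that 17 is prime.");
// Pass `req` to openai.chat.completions.create(...) as in the Quick Start.
```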

Quick Start

Access OpenAI’s GPT models through the AI Router.
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.ORQ_API_KEY,
  baseURL: "https://api.orq.ai/v2/router",
});

const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "user",
      content: "Explain quantum computing in simple terms",
    },
  ],
});

console.log(response.choices[0].message.content);

Using the AI Router

Access OpenAI’s GPT models (GPT-4, GPT-4o, GPT-4 Turbo) through the AI Router with advanced chat completions, streaming, function calling, and intelligent model routing. All OpenAI models are available with consistent formatting and automatic request logging.
OpenAI models use the provider slug format: openai/model-name. For example: openai/gpt-4o

Prerequisites

Before making requests to the AI Router, configure your environment and install the SDK if you choose to use one.

Router Endpoint

To use the AI Router with OpenAI models, configure your OpenAI SDK client with the base URL:
https://api.orq.ai/v2/router
Required Headers

Include the following headers in all requests:
Authorization: Bearer $ORQ_API_KEY
Content-Type: application/json
Optional headers for advanced routing:
X-Orq-Provider: openai
X-Orq-Virtual-Key: your-virtual-key-name (optional)
Getting Your API Key
  1. Go to API Keys
  2. Click Create API Key and copy it
  3. Store it in your environment as ORQ_API_KEY
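
A missing or misnamed variable otherwise only surfaces as an authentication error at request time, so a small fail-fast check at startup can help. This is an illustrative helper, not part of any SDK:

```typescript
// Read a required environment variable once at startup and fail with a
// clear message instead of sending unauthenticated requests later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const apiKey = requireEnv("ORQ_API_KEY");
```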
SDK Installation

Install the OpenAI SDK for your language:
npm install openai
# or
yarn add openai
If your OpenAI code is already functioning, you only need to change the base URL to the router endpoint and the API key to your ORQ_API_KEY.

Chat Completions

Send messages to OpenAI models and get intelligent responses:
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant that explains complex concepts simply.",
    },
    {
      role: "user",
      content: "Explain machine learning",
    },
  ],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(response.choices[0].message.content);

Function Calling

OpenAI models support function calling for structured interactions:
const response = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "user",
      content: "What's the weather in San Francisco?",
    },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather in a location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: {
              type: "string",
              enum: ["celsius", "fahrenheit"],
            },
          },
          required: ["location"],
        },
      },
    },
  ],
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall?.type === "function") {
  console.log(`Calling function: ${toolCall.function.name}`);
  console.log(`Arguments: ${toolCall.function.arguments}`);
}
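
To complete the round trip, run the function yourself and send the result back as a tool message so the model can produce its final answer. A sketch with a stubbed weather lookup (the getWeather implementation and helper names are illustrative):

```typescript
// In a real app, getWeather would query a weather API; here it is stubbed.
function getWeather(location: string, unit = "celsius"): string {
  return JSON.stringify({ location, unit, temperature: 18, condition: "sunny" });
}

// Turn a tool call from the model into a `tool` role message carrying
// the function result, keyed to the call by tool_call_id.
function toToolMessage(toolCall: {
  id: string;
  function: { name: string; arguments: string };
}): { role: "tool"; tool_call_id: string; content: string } {
  const args = JSON.parse(toolCall.function.arguments);
  const result = getWeather(args.location, args.unit);
  return { role: "tool", tool_call_id: toolCall.id, content: result };
}
```

Append the original assistant message (with its tool_calls) and this tool message to the conversation, then call chat.completions.create again; the model folds the result into its reply.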

Streaming

Stream responses for real-time output and improved user experience:
const stream = await openai.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [
    {
      role: "user",
      content: "Write a short poem about the ocean",
    },
  ],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}
console.log("\n");
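
If you also need the complete text after streaming finishes (for logging or caching), accumulate the deltas as they arrive. A small helper that works over any async iterable of chunks (the helper name is illustrative):

```typescript
// Shape of the streamed chunks we consume.
interface StreamChunk {
  choices: { delta?: { content?: string } }[];
}

// Accumulate streamed content deltas into the complete response text,
// optionally forwarding each delta for live display.
async function collectStream(
  stream: AsyncIterable<StreamChunk>,
  onDelta?: (text: string) => void,
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content ?? "";
    full += content;
    onDelta?.(content); // e.g., process.stdout.write
  }
  return full;
}
```

With the stream from the example above: const full = await collectStream(stream, (t) => process.stdout.write(t));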

Using the Responses API

The Responses API combines the best of the Chat Completions and Assistants APIs. Call it through the AI Router's /responses endpoint:

Endpoint

POST https://api.orq.ai/v2/router/responses

Basic Usage

Send a basic request to the Responses API through the AI Router:
curl -X POST https://api.orq.ai/v2/router/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "input": "Write a three sentence bedtime story about a unicorn"
  }'

With Streaming

Stream responses for real-time output:
curl -X POST https://api.orq.ai/v2/router/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "input": "Explain quantum computing",
    "stream": true
  }'

With Tools

Use function calling with the Responses API:
curl -X POST https://api.orq.ai/v2/router/responses \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "input": "What'\''s the weather in New York?",
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get current weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "City and state"
              }
            },
            "required": ["location"]
          }
        }
      }
    ]
  }'

Automatic Request Logging

All requests made through the AI Router are automatically logged to your dashboard. You can view:
  • Request details: Model used, tokens, latency
  • Cost tracking: Per-request and aggregate costs
  • Error monitoring: Failed requests with error messages
  • Performance metrics: Response times and throughput
No additional configuration is needed—logging happens automatically.

Troubleshooting

| Issue | Problem | Solution |
| --- | --- | --- |
| Rate Limiting | Too many requests to the OpenAI API | Implement exponential backoff for retries. The AI Router automatically retries failed requests with appropriate delays. |
| High Latency | Slow response times | Monitor the dashboard for model performance. Consider using gpt-3.5-turbo for latency-sensitive applications. |
| Invalid Request Errors | Malformed API requests | Verify the model name format (openai/model-name). Ensure required fields are present in the messages array. Check that messages use valid role values (user, assistant, system). |
| API Errors | HTTP errors from OpenAI | Handle errors with try/catch. Check error status codes and messages. Use the AI Router's automatic retry mechanism. |
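
The exponential-backoff advice above can be sketched as a small retry wrapper; the delay constants and the retryable-status check are illustrative defaults, not prescribed values:

```typescript
// Retry a request with exponential backoff plus jitter on rate-limit
// or transient server errors (HTTP 429 and 5xx); rethrow anything else.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.status ?? err?.response?.status;
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt >= maxRetries) throw err;
      // Double the delay each attempt; jitter avoids synchronized retries.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrap any router call: const response = await withBackoff(() => openai.chat.completions.create({ ... }));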

Reference