
Set Up Your API Key

To use Google AI with Orq.ai, follow these steps:
  1. Navigate to Providers (in AI Studio: Model Garden > Providers, in AI Router: Providers)
  2. Find Google AI in the list
  3. Click the Configure button next to Google AI
  4. In the modal that opens, select Setup your own API Key
  5. Enter a name for this configuration (e.g., “Google AI Production”)
  6. Paste your Google AI API Key into the provided field
  7. Click Save to complete the setup
Your Google AI API key is now configured and ready to use with Orq.ai in AI Studio or through the AI Router.
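With the key configured, you can sanity-check the setup with a single request through the AI Router's chat completions endpoint. Note that `ORQ_API_KEY` is your Orq API key, not the Google AI key you just stored:

```shell
# Sanity-check the configuration via the AI Router.
# ORQ_API_KEY is your Orq API key, not the Google AI key.
curl https://api.orq.ai/v2/router/chat/completions \
  -H "Authorization: Bearer $ORQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google-ai/gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```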

Available Models

The AI Router supports all current Google Gemini models. Here are the most commonly used:
Model                              Context   Best For
google-ai/gemini-3-pro-preview     1M        Latest preview, most advanced (experimental)
google-ai/gemini-2.5-pro           1M        Latest stable, most capable
google-ai/gemini-2.5-flash         1M        Fast, balanced performance
google-ai/gemini-2.5-flash-lite    1M        Lightweight, cost-effective

Latest Generation (Gemini 3 - Preview)

  • google-ai/gemini-3-pro-preview - Latest preview, most advanced
  • google-ai/gemini-3-flash-preview - Fast preview model

Current Generation (Gemini 2.5)

  • google-ai/gemini-2.5-pro - Latest stable, most capable
  • google-ai/gemini-2.5-flash - Fast, balanced performance
  • google-ai/gemini-2.5-flash-lite - Lightweight, cost-effective
  • google-ai/gemini-2.5-flash-preview-09-2025 - Flash preview
  • google-ai/gemini-2.5-flash-lite-preview-09-2025 - Lite preview

Stable Generation (Gemini 2.0)

  • google-ai/gemini-2.0-flash - Stable, reliable
  • google-ai/gemini-2.0-flash-001 - Specific version
  • google-ai/gemini-2.0-flash-lite - Lightweight variant
  • google-ai/gemini-2.0-flash-lite-001 - Lite specific version
  • google-ai/gemini-2.0-flash-lite-preview-02-05 - Preview version

Latest Versions

  • google-ai/gemini-flash-latest - Latest flash model
  • google-ai/gemini-flash-lite-latest - Latest lite model
For a complete and up-to-date list of all available Google Gemini models, see Supported Models. All models are available through the AI Router with the google-ai/ prefix.
Use google-ai/gemini-3-pro-preview for the latest preview, google-ai/gemini-2.5-pro for the latest stable model, or google-ai/gemini-2.5-flash for the best balance of performance and cost.

Quick Start

Access Google Gemini models through the AI Router.
import { GoogleGenerativeAI } from "@google/generative-ai";

// The SDK constructor takes the API key as a string; the router
// base URL is passed via the request options on getGenerativeModel.
const genAI = new GoogleGenerativeAI(process.env.ORQ_API_KEY);

const model = genAI.getGenerativeModel(
  { model: "google-ai/gemini-2.5-pro" },
  { baseUrl: "https://api.orq.ai/v2/router" }
);

const result = await model.generateContent(
  "Explain quantum computing in simple terms"
);

console.log(result.response.text());

Using the AI Router

Access Google Gemini models through the AI Router with advanced chat completions, streaming, and intelligent model routing. All Gemini models are available with consistent formatting and automatic request logging.
Google AI models use the provider slug format: google-ai/model-name. For example: google-ai/gemini-2.5-pro

Prerequisites

Before making requests to the AI Router, you need to configure your environment and install the SDK if you choose to use one.
Endpoint
POST https://api.orq.ai/v2/router/chat/completions
Required Headers
Include the following headers in all requests:
Authorization: Bearer $ORQ_API_KEY
Content-Type: application/json
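As a sketch, here is a raw request with these headers using `fetch`; the `choices[0].message.content` response shape assumes the endpoint follows the OpenAI-style chat completions format:

```javascript
// Raw chat completions request with the required headers.
// Assumes ORQ_API_KEY is set in the environment.
const res = await fetch("https://api.orq.ai/v2/router/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ORQ_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "google-ai/gemini-2.5-flash",
    messages: [{ role: "user", content: "Hello" }],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```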
Getting your API Key:
  1. Go to API Keys
  2. Click Create API Key and copy it
  3. Store it in your environment as ORQ_API_KEY
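For example, in a POSIX shell (the key value is a placeholder):

```shell
# Make the key available to your code and to curl requests.
export ORQ_API_KEY="your-orq-api-key"
```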
SDK Installation
Install the Google Generative AI SDK:
npm install @google/generative-ai
# or
yarn add @google/generative-ai

Basic Usage

If you already have working OpenAI SDK code, you only need to change the base URL to the router endpoint and the API key to your ORQ_API_KEY.

Chat Completions

Send messages to Gemini models and get intelligent responses:
const result = await model.generateContent({
  contents: [
    {
      role: "user",
      parts: [{ text: "Explain machine learning" }],
    },
  ],
});

console.log(result.response.text());

Streaming

Stream responses for real-time output and improved user experience:
const stream = await model.generateContentStream(
  "Write a short poem about the ocean"
);

// Each streamed chunk exposes a text() helper for the incremental delta.
for await (const chunk of stream.stream) {
  process.stdout.write(chunk.text());
}

Reference