AI Router

Overview

The Vercel AI SDK is a TypeScript toolkit for building AI-powered applications with streaming, structured outputs, and multi-model support. By connecting it to Orq.ai’s AI Router via the @orq-ai/vercel-provider, you get access to 300+ models with a single provider setup.

Key Benefits

Orq.ai’s AI Router enhances your Vercel AI applications with:

  • Complete Observability: Track every generation, stream, and structured output with detailed traces
  • Built-in Reliability: Automatic fallbacks, retries, and load balancing for production resilience
  • Cost Optimization: Real-time cost tracking and spend management across all your AI operations
  • Multi-Provider Access: Access 300+ LLMs and 20+ providers through a single, unified integration

Prerequisites

Before integrating Vercel AI with Orq.ai, ensure you have:
  • An Orq.ai account and API Key
  • Node.js 18 or higher
To set up your API key, see API keys & Endpoints.

Installation

npm install @orq-ai/vercel-provider ai

Configuration

Configure the Orq.ai provider with your API key:
TypeScript
import { createOrqAiProvider } from "@orq-ai/vercel-provider";

const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY,
});
The provider sends requests to Orq.ai’s AI Router at https://api.orq.ai/v2/router.

Text Generation

TypeScript
import { createOrqAiProvider } from "@orq-ai/vercel-provider";
import { generateText } from "ai";

const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY,
});

const { text } = await generateText({
  model: orq("openai/gpt-4o"),
  prompt: "Write a haiku about programming",
});

console.log(text);
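
Alongside text, generateText also returns response metadata such as token usage and the finish reason. A small sketch; exact usage field names vary between ai package versions, but totalTokens is common to them:
TypeScript
const { text, usage, finishReason } = await generateText({
  model: orq("openai/gpt-4o"),
  prompt: "Write a haiku about programming",
});

console.log(text);
console.log(`finishReason: ${finishReason}, totalTokens: ${usage.totalTokens}`);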

Streaming Responses

TypeScript
import { createOrqAiProvider } from "@orq-ai/vercel-provider";
import { streamText } from "ai";

const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY,
});

const { textStream } = await streamText({
  model: orq("openai/gpt-4o"),
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in two sentences." },
  ],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
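
If you also need the complete text once streaming finishes, the result object from streamText exposes it as a promise; a short sketch under the same setup as above:
TypeScript
const result = streamText({
  model: orq("openai/gpt-4o"),
  prompt: "Explain quantum computing in two sentences.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Resolves once the stream has completed
const fullText = await result.text;
console.log(`\n\nReceived ${fullText.length} characters`);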

Structured Output

Use a JSON system prompt and parse the response:
TypeScript
import { createOrqAiProvider } from "@orq-ai/vercel-provider";
import { generateText } from "ai";

const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY,
});

const { text } = await generateText({
  model: orq("openai/gpt-4o"),
  messages: [
    {
      role: "system",
      content: "You are a data assistant. Always respond with valid JSON only, no markdown.",
    },
    {
      role: "user",
      content: "Generate information about France with fields: name, capital, population, languages.",
    },
  ],
});

const country = JSON.parse(text);
console.log(country);
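
Note that JSON.parse throws if the model wraps its answer in markdown fences or adds prose around the JSON. If your ai version supports it, generateObject with a Zod schema gives validated, typed output instead; a minimal sketch, assuming zod is installed and the selected model supports structured output through the router:
TypeScript
import { generateObject } from "ai";
import { z } from "zod";

const { object: country } = await generateObject({
  model: orq("openai/gpt-4o"),
  schema: z.object({
    name: z.string(),
    capital: z.string(),
    population: z.number(),
    languages: z.array(z.string()),
  }),
  prompt: "Generate information about France.",
});

console.log(country); // already parsed and schema-validated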

Model Selection

With Orq.ai, you can use any supported model from 20+ providers:
TypeScript
import { createOrqAiProvider } from "@orq-ai/vercel-provider";
import { generateText } from "ai";

const orq = createOrqAiProvider({
  apiKey: process.env.ORQ_API_KEY,
});

// Use Claude
const claudeResult = await generateText({
  model: orq("anthropic/claude-sonnet-4-5-20250929"),
  prompt: "What is the largest planet?",
});

// Use Gemini
const geminiResult = await generateText({
  model: orq("google/gemini-2.5-flash"),
  prompt: "What is the largest planet?",
});

// Use Groq
const groqResult = await generateText({
  model: orq("groq/llama-3.3-70b-versatile"),
  prompt: "What is the largest planet?",
});

Observability

Getting Started

The Vercel AI SDK provides React hooks and utilities for building AI-powered applications, with built-in OpenTelemetry support. Its experimental telemetry features automatically capture detailed traces of AI operations, making integration with Orq.ai straightforward for comprehensive observability.

Prerequisites

Before you begin, ensure you have:
  • An Orq.ai account and an API Key.
  • Vercel AI SDK v3.1+ (with telemetry support).
  • Node.js 18+ and TypeScript support.
  • API keys for your LLM providers (OpenAI, Anthropic, etc.).

Install Dependencies

# Core Vercel AI SDK with latest version
npm install ai@latest

# OpenTelemetry packages
npm install @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
npm install @opentelemetry/instrumentation @opentelemetry/resources
npm install @opentelemetry/semantic-conventions

# Vercel OTel helper (used by the instrumentation example below)
npm install @vercel/otel

# Provider SDKs (choose what you need)
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google

# Optional: For React applications
npm install @ai-sdk/react

Configure Orq.ai

Set up your environment variables to connect to Orq.ai’s OpenTelemetry collector.
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=vercel-ai-app,service.version=1.0.0"
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=vercel-ai-app,service.version=1.0.0"
$env:OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
Using .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=vercel-ai-app,service.version=1.0.0
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
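
If you are not on Next.js, you can wire the exporter up manually with the OpenTelemetry packages from the install step. A minimal sketch; the OTLP exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS from the environment variables configured above:
// tracing.js
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Endpoint and Authorization header come from the environment variables above
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
});

sdk.start();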

Integrations

The Vercel AI SDK has built-in OpenTelemetry support through the experimental_telemetry option. The simplest way to send traces to Orq.ai is to register an OTLP exporter (shown here with @vercel/otel, which Next.js loads automatically from instrumentation.js) and enable telemetry per call:
// instrumentation.js
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'your-project-name',
    traceExporter: new OTLPHttpJsonTraceExporter({
      url: 'https://api.orq.ai/v2/otel/v1/traces',
      headers: {
        Authorization: `Bearer ${process.env.ORQ_API_KEY}`,
      },
    }),
  });
}
// index.js
import { register } from "./instrumentation.js";
import { generateText, streamText, generateObject } from "ai";
import { openai } from "@ai-sdk/openai";

// Next.js calls register() automatically; in a plain Node app,
// call it yourself before making any AI calls.
register();

// Simple usage with telemetry enabled
const result = await generateText({
  model: openai("gpt-4.1"),
  prompt: "Write a short story about a robot",
  experimental_telemetry: {
    isEnabled: true,
  },
});

// Advanced configuration with custom metadata
const resultWithMetadata = await generateText({
  model: openai("gpt-4.1"),
  prompt: "Explain quantum computing",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "quantum-explanation",
    metadata: {
      userId: "user-123",
      requestId: "req-456",
      environment: "production",
    },
  },
});

// Control what data is recorded
const userPrompt = "Draft a short bio from my notes"; // hypothetical user input
const resultWithPrivacy = await generateText({
  model: openai("gpt-4.1"),
  prompt: userPrompt,
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // Don't record prompts
    recordOutputs: false, // Don't record responses
  },
});
The key to enabling telemetry is including the experimental_telemetry payload in each call. The same option works across generateText, streamText, and generateObject:
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // Don't record prompts
    recordOutputs: false, // Don't record responses
  },
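
For example, the same payload applies to a streaming call:
const { textStream } = streamText({
  model: openai("gpt-4.1"),
  prompt: "List three benefits of tracing AI calls",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "tracing-benefits",
  },
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}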