Vercel AI SDK
Integrate Orq.ai with Vercel AI SDK using OpenTelemetry
Getting Started
The Vercel AI SDK provides React hooks and utilities for building AI-powered applications, and it ships with built-in OpenTelemetry support. Its experimental telemetry features automatically capture detailed traces of AI operations, which makes integrating with Orq.ai for comprehensive observability straightforward.
Prerequisites
Before you begin, ensure you have:
- An Orq.ai account and API key
- Vercel AI SDK v3.1+ (with telemetry support)
- Node.js 18+ and TypeScript support
- API keys for your LLM providers (OpenAI, Anthropic, etc.)
Install Dependencies
# Core Vercel AI SDK (latest version)
npm install ai@latest
# OpenTelemetry packages
npm install @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
npm install @opentelemetry/instrumentation @opentelemetry/resources
npm install @opentelemetry/semantic-conventions
# Provider SDKs (choose what you need)
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
# Optional: For React applications
npm install @ai-sdk/react
Configure Orq.ai
Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:
Unix/Linux/macOS:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=vercel-ai-app,service.version=1.0.0"
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
Windows (PowerShell):
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=vercel-ai-app,service.version=1.0.0"
$env:OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
Using .env file:
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=vercel-ai-app,service.version=1.0.0
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
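If you keep these values in a .env file, make sure they are loaded before the tracer is created. Next.js loads .env files automatically; for a standalone Node script you can use the dotenv package (a minimal sketch — the dotenv dependency is an assumption, not part of the install list above):
// index.ts — load environment variables before anything reads them
import "dotenv/config"; // assumes `npm install dotenv`
import "./lib/tracing"; // the OpenTelemetry setup shown below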
Integrations
The Vercel AI SDK has built-in OpenTelemetry support through the experimental_telemetry
option. Here's how to integrate it with Orq.ai:
Method 1: Built-in Telemetry (Recommended)
The simplest way to enable telemetry is using the SDK's native support:
// lib/tracing.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";

const traceExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT + "/v1/traces",
  headers: {
    // Take everything after "Authorization=" so API keys containing "=" are not truncated
    Authorization:
      process.env.OTEL_EXPORTER_OTLP_HEADERS?.replace(/^Authorization=/, "") ?? "",
  },
});

const sdk = new NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "vercel-ai-app",
    [SemanticResourceAttributes.SERVICE_VERSION]: "1.0.0",
  }),
  traceExporter,
});

// Initialize the SDK and register it with the OpenTelemetry API
sdk.start();

export default sdk;
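In a Next.js project you can also start this SDK through the framework's instrumentation.ts hook instead of importing the module manually (a sketch, assuming the Node.js runtime; depending on your Next.js version you may need to enable the instrumentation hook in next.config):
// instrumentation.ts (project root)
export async function register() {
  // Only start the Node SDK in the Node.js runtime, not in the Edge runtime
  if (process.env.NEXT_RUNTIME === "nodejs") {
    await import("./lib/tracing");
  }
}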
Using Native Telemetry
Enable telemetry on any AI SDK function call:
import { generateText, streamText, generateObject } from "ai";
import { openai } from "@ai-sdk/openai";

// Simple usage with telemetry enabled
const result = await generateText({
  model: openai("gpt-4.1"),
  prompt: "Write a short story about a robot",
  experimental_telemetry: {
    isEnabled: true,
  },
});

// Advanced configuration with custom metadata
const resultWithMetadata = await generateText({
  model: openai("gpt-4.1"),
  prompt: "Explain quantum computing",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "quantum-explanation",
    metadata: {
      userId: "user-123",
      requestId: "req-456",
      environment: "production",
    },
  },
});

// Control what data is recorded
const resultWithPrivacy = await generateText({
  model: openai("gpt-4.1"),
  prompt: userPrompt,
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // Don't record prompts
    recordOutputs: false, // Don't record responses
  },
});
Method 2: Custom Tracer Integration
For more control, provide a custom tracer:
import { trace } from "@opentelemetry/api";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const tracer = trace.getTracer("my-app", "1.0.0");

const result = await generateText({
  model: openai("gpt-4.1"),
  prompt: "Hello, world!",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer, // Use your custom tracer
  },
});
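Because the same tracer instance is used, you can also wrap several calls in a parent span so related generations show up as a single trace in Orq.ai. A minimal sketch continuing the snippet above (the span name and prompts are illustrative):
const result2 = await tracer.startActiveSpan("summarize-then-translate", async (span) => {
  const summary = await generateText({
    model: openai("gpt-4.1"),
    prompt: "Summarize the plot of Hamlet in two sentences",
    experimental_telemetry: { isEnabled: true, tracer },
  });

  const translation = await generateText({
    model: openai("gpt-4.1"),
    prompt: `Translate to French: ${summary.text}`,
    experimental_telemetry: { isEnabled: true, tracer },
  });

  span.end(); // close the parent span once both calls have finished
  return translation;
});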
Examples
Basic Text Generation with Telemetry
// app.ts
import "./lib/tracing"; // Initialize OpenTelemetry first
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

async function basicExample() {
  const result = await generateText({
    model: openai("gpt-4"),
    prompt: "Explain quantum computing in simple terms",
    temperature: 0.7,
    maxTokens: 200,
    experimental_telemetry: { isEnabled: true },
  });

  console.log("Generated text:", result.text);
  console.log("Token usage:", result.usage);
}

basicExample().catch(console.error);
Streaming Text with Telemetry
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

async function streamingExample() {
  const result = await streamText({
    model: openai("gpt-4"),
    prompt: "Write a short story about a robot discovering emotions",
    temperature: 0.8,
    experimental_telemetry: {
      isEnabled: true,
      functionId: "story-generation",
    },
  });

  let chunkCount = 0;
  let totalLength = 0;

  for await (const delta of result.textStream) {
    process.stdout.write(delta);
    chunkCount++;
    totalLength += delta.length;
  }

  console.log(
    `\n\nStreaming completed: ${chunkCount} chunks, ${totalLength} total characters`,
  );
}

streamingExample().catch(console.error);
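The spans already record token usage, but if you also want it in application code after the stream finishes, recent SDK versions expose it as a promise on the result (a hedged sketch; the property names follow the 3.x/4.x usage shape):
// Inside streamingExample(), after the for-await loop has drained the stream:
const usage = await result.usage; // resolves once streaming completes
console.log("Prompt tokens:", usage.promptTokens);
console.log("Completion tokens:", usage.completionTokens);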
Object Generation with Telemetry
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
  occupation: z.string(),
  bio: z.string(),
});

async function objectGenerationExample() {
  const result = await generateObject({
    model: openai("gpt-4"),
    schema: PersonSchema,
    prompt: "Generate a fictional character profile for a detective story",
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        schemaType: "PersonSchema",
        useCase: "story-generation",
      },
    },
  });

  console.log("Generated object:", result.object);
  console.log(
    "Schema validation passed:",
    PersonSchema.safeParse(result.object).success,
  );
}

objectGenerationExample().catch(console.error);
Next.js API Route with Telemetry
// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai("gpt-4"),
    messages,
    temperature: 0.7,
    maxTokens: 500,
    experimental_telemetry: {
      isEnabled: true,
      functionId: "chat-api",
      metadata: {
        route: "/api/chat",
        conversationLength: messages.length,
      },
    },
  });

  // AI SDK 3.x; on AI SDK 4.x use result.toDataStreamResponse() instead
  return result.toAIStreamResponse();
}
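To confirm that spans are reaching Orq.ai, you can exercise the route once and then check the dashboard (a sketch; http://localhost:3000 assumes the default Next.js dev server):
// verify-chat.ts — send one request to the telemetry-enabled route
const response = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Say hello in three languages" }],
  }),
});

console.log("Status:", response.status);
console.log("Body:", await response.text());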
React Hook with Server-Side Telemetry
// components/ChatComponent.tsx
"use client";

import { useChat } from "@ai-sdk/react"; // on AI SDK 3.x this hook is exported from "ai/react"

export function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: "/api/chat", // Server endpoint with telemetry enabled
    // Client-side telemetry is automatically handled through the server
    onError: (error) => {
      console.error("Chat error:", error);
    },
  });

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((message) => (
          <div key={message.id} className={`message ${message.role}`}>
            {message.content}
          </div>
        ))}
        {isLoading && <div className="loading">AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={input}
          onChange={handleInputChange}
          placeholder="Type your message..."
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}
Tool Usage with Telemetry
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Define tools - telemetry automatically tracks tool executions
const getWeather = tool({
  description: "Get current weather for a location",
  parameters: z.object({
    location: z.string().describe("City name"),
  }),
  execute: async ({ location }) => {
    // Mock weather API call
    const weather = {
      location,
      temperature: Math.round(Math.random() * 30 + 10),
      condition: "sunny",
    };
    return `The weather in ${location} is ${weather.condition} with a temperature of ${weather.temperature}°C.`;
  },
});

const calculateTip = tool({
  description: "Calculate tip amount",
  parameters: z.object({
    amount: z.number().describe("Bill amount"),
    percentage: z.number().describe("Tip percentage"),
  }),
  execute: async ({ amount, percentage }) => {
    const tip = (amount * percentage) / 100;
    const total = amount + tip;
    return `Tip: $${tip.toFixed(2)}, Total: $${total.toFixed(2)}`;
  },
});

async function toolUsageExample() {
  const result = await generateText({
    model: openai("gpt-4"),
    tools: { getWeather, calculateTip },
    prompt: "What's the weather in Paris? Also, calculate a 15% tip on a $50 bill.",
    experimental_telemetry: {
      isEnabled: true,
      functionId: "multi-tool-example",
      metadata: {
        toolsAvailable: ["getWeather", "calculateTip"],
      },
    },
  });

  console.log("Response:", result.text);
  console.log("Tool calls:", result.toolCalls);
}

toolUsageExample().catch(console.error);
Next Steps
✅ Verify traces: Check your Orq.ai dashboard to see incoming traces
✅ Add custom attributes: Enhance traces with business-specific metadata
✅ Set up alerts: Configure monitoring for performance degradation
✅ Explore metrics: Use trace data for performance optimization