What are Traces?
Traces let you dive into the workflow of each model generation and understand the inner workings of an LLM call on the AI Router. Each trace corresponds to the events within a generation, following each call to the model configured in the router.
Use Cases
- Monitor Performance - Identify bottlenecks by seeing which operations take the most time. Use this to optimize your prompts or model selections.
- Track Costs - See exactly which operations are consuming tokens and costing money. Understand the cost breakdown across different models and operations.
- Debug Issues - When something goes wrong, traces show you exactly where in the request pipeline the failure occurred, helping you quickly identify root causes.
- Optimize Routing - For AI applications using model routing, traces show which models were selected and how the routing logic performed.
- Analyze Request Flow - Understand how your requests are being processed by seeing the complete operation hierarchy and dependencies.
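Traces are generated automatically for requests that go through the router. As a rough illustration, the sketch below sends a chat completion through an OpenAI-compatible endpoint; the base URL, environment variable names, and model name are placeholders for your own router configuration, not documented values.

```python
# Minimal sketch: send one request through the router so it shows up in Traces.
# The base URL and environment variable names are placeholders for your own
# router deployment; adjust them to match your configuration.
import os

from openai import OpenAI  # any OpenAI-compatible client works the same way

client = OpenAI(
    base_url=os.environ["AI_ROUTER_BASE_URL"],  # your router's endpoint
    api_key=os.environ["AI_ROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model (or routing alias) configured in your router
    messages=[{"role": "user", "content": "Summarize today's deployment status."}],
)

print(response.choices[0].message.content)
```

Once this request completes, the corresponding trace appears on the Traces page described below.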
Viewing Traces
To view traces, head to the AI Router and open the Traces page. Each trace includes:
- Request Timeline: A hierarchical breakdown of all operations that occurred during your request, from routing decisions to model invocations.
- Operation Details: Each step in the trace shows:
- Model used and provider information.
- Token consumption (input/output).
- Cost for that specific operation.
- Status and any error information.
- Request Metadata:
- Unique trace ID for tracking.
- Total request duration.
- Aggregated token usage and cost.
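To make the pieces above concrete, the dictionary below is a hypothetical illustration of how a single trace record might be organized; the field names are not the router's actual schema.

```python
# Hypothetical shape of a single trace record, for illustration only --
# these field names are not the router's documented schema.
example_trace = {
    "trace_id": "trace_abc123",                      # unique trace ID for tracking
    "duration_ms": 1840,                             # total request duration
    "total_tokens": {"input": 512, "output": 128},   # aggregated token usage
    "total_cost_usd": 0.0041,                        # aggregated cost
    "operations": [                                  # request timeline, in order
        {"name": "routing_decision", "status": "success"},
        {
            "name": "model_invocation",
            "model": "gpt-4o",
            "provider": "OpenAI",
            "tokens": {"input": 512, "output": 128},
            "cost_usd": 0.0041,
            "status": "success",
            "error": None,
        },
    ],
}
```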
Filtering Traces
You can filter traces to find specific requests or focus on particular aspects of your AI Router calls:

| Filter | Description |
|---|---|
| Model | Filter by specific models used (e.g., gpt-4o, claude-3-sonnet) |
| Provider | Filter by provider (e.g., OpenAI, Anthropic, Google) |
| Status | Filter by request status (Success, Error, etc.) |
| Cost Range | Filter traces by cost (minimum and maximum values) |
| Duration | Filter by request execution duration |
| Date Range | Filter traces by when they were created |
| Deployment | Filter by specific deployment |
| Custom Attributes | Filter by metadata or custom attributes attached to requests |
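The Custom Attributes filter only matches requests that carry metadata. If your router accepts request metadata, you can tag requests at call time; how the metadata is passed depends on your setup, and the snippet below is a sketch that assumes a metadata object in the request body rather than a documented interface.

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ["AI_ROUTER_BASE_URL"],  # placeholder for your router endpoint
    api_key=os.environ["AI_ROUTER_API_KEY"],
)

# Tag the request with custom attributes so it can be found later with the
# Custom Attributes filter. Whether the router reads metadata from the request
# body or from headers depends on your configuration; this assumes the body.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Classify this support ticket."}],
    extra_body={"metadata": {"team": "support", "feature": "ticket-triage"}},
)
```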
Managing Columns
You can show and hide columns to display the data most relevant to your analysis. To customize columns:
- Look for the column settings button in the traces table header
- Toggle columns on or off to show/hide specific data such as:
- Model, Provider, Status
- Token usage (input/output)
- Cost, Duration, Latency
- Trace ID, Timestamp
- Custom metadata fields
Creating Custom Views
Save frequently used filter combinations as reusable views that can be shared across your team. To create a custom view:
- Set your desired filters - Apply the filters you want to use (e.g., filter by model, status, date range)
- Click “All Rows” (top right of the traces panel)
- Select “Create New View”
- Give your view a title - Choose a descriptive name (e.g., “GPT-4o Errors”, “High-Cost Requests”)
- Choose Make this view private if you want to keep the view personal (not shared with team members).
- Save - Your filtered view is now created and accessible