Enabling new Models in your Workspace
To get started with enabling models in your workspace, ensure you have connected your API keys to the desired providers. To learn more, see Connecting Providers.

- Providers lets you filter by LLM provider.
- Model Type lets you filter by the type of model you want to see (Chat, Completion, Embedding, Rerank, Vision).
- Active lets you filter on enabled or disabled models in your workspace.
- Owner lets you filter between Orq.ai-provided models and private models.
- API Key Status lets you filter for models for which you have added an API key.
Connecting Providers
For production workloads, use your own API keys with the supported providers. To set up your own API key, head to the AI Studio and open the AI Router > Providers section. Choose the provider you wish to set an API key for, press Connect, then select Setup your own API Key.
Deeper Provider Integrations
Some providers require specific configurations; see the related documentation.

Having Multiple API Keys
You can configure multiple API keys for a single provider; to do so, select Add a new API key.

Benefits of using multiple API Keys
Credential Failure
Having a different API key available can be useful in case one becomes invalid or, for instance, runs out of credit. An extra key configured on a fallback model ensures you can respond in all cases.

Multiple Environments
If you have multiple API keys used for different purposes, this lets you organize your models to use the credentials dedicated to the correct environment.

Using a specific API key in model configuration
Once your API keys are configured within the Providers panel, you can use them within Playground, Experiment, Deployment, and Agent. This feature is accessible to any model, including Fallback models.
Onboarding Private Models
You can onboard private models by choosing Add Model at the top-right of the screen. This can be useful when you have a model fine-tuned outside of Orq.ai that you want to use within a Deployment.

Private Model Providers
Referencing Private Models in Code
When referencing private models through our SDKs, API, or Supported Libraries, the model is referenced by the following string: <workspacename>@<provider>/<modelname>.
Example: corp@azure/gpt-4o-2024-05-13
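As an illustration, the reference string can be assembled programmatically from its three parts. The helper below is a hypothetical sketch, not part of the Orq.ai SDK:

```python
def private_model_ref(workspace: str, provider: str, model: str) -> str:
    """Build a private-model reference string (hypothetical helper,
    following the <workspacename>@<provider>/<modelname> format)."""
    return f"{workspace}@{provider}/{model}"

print(private_model_ref("corp", "azure", "gpt-4o-2024-05-13"))
# → corp@azure/gpt-4o-2024-05-13
```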
Auto Router
The Auto Router is a virtual model that automatically routes each request to the most appropriate model for it. Simple requests go to a cheaper model; requests that benefit from higher quality go to a stronger one. The routing decision is based on predicted human preference, computed in real time from the content of the prompt. To create an Auto Router, click Add Model at the top-right of the AI Router and select Auto Router from the dropdown.
- Model ID: a unique identifier for this router. Lowercase letters, numbers, and hyphens only (e.g. my-auto-router). This is how you reference the router in your deployments.
- Strong Model: the higher-quality model to route to when the prompt warrants it.
- Economical Model: the cheaper model to use for simpler requests.
- Profile: controls how aggressively the router favors the strong model:

| Profile | Behavior |
|---|---|
| Quality | Routes more requests to the strong model. Prioritizes quality over cost. |
| Balanced | Distributes requests between both models based on prompt complexity. |
| Cost | Favors the economical model more aggressively to reduce spend. |
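The Model ID character rule above (lowercase letters, numbers, and hyphens only) can be checked with a simple pattern before creating a router. This is an illustrative sketch, not part of any Orq.ai SDK:

```python
import re

# Lowercase letters, numbers, and hyphens only, per the Model ID rule above.
MODEL_ID_RE = re.compile(r"^[a-z0-9-]+$")

def is_valid_model_id(model_id: str) -> bool:
    """Check an Auto Router Model ID against the documented character set."""
    return bool(MODEL_ID_RE.fullmatch(model_id))

print(is_valid_model_id("my-auto-router"))  # True
print(is_valid_model_id("My Router!"))      # False
```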
Recommended Model Pairs
These pairs combine high routing accuracy with significant cost ratios (over 10x), making them effective starting points for Auto Router configurations.

| Strong Model | Economical Model |
|---|---|
| Google Gemini 2.5 Pro | Google Gemini 2.5 Flash |
| OpenAI GPT-5.1 | OpenAI GPT-4o Mini |
| Anthropic Claude Opus 4 | Google Gemini 3 Flash |
| OpenAI GPT-4o | OpenAI GPT-4o Mini |
Referencing an Auto Router in Code
When using an Auto Router through the SDKs, API, or Supported Libraries, reference it by the following string: <workspacename>@orq/<model-id>.
Example: acme@orq/my-auto-router
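Going the other way, a reference of this shape can be split back into its components. The parser below is a hypothetical sketch for illustration only:

```python
def parse_model_ref(ref: str) -> dict:
    """Split a reference like 'acme@orq/my-auto-router' into its parts
    (hypothetical helper, mirroring the <workspacename>@orq/<model-id> format)."""
    workspace, rest = ref.split("@", 1)
    provider, model_id = rest.split("/", 1)
    return {"workspace": workspace, "provider": provider, "model_id": model_id}

parse_model_ref("acme@orq/my-auto-router")
# → {'workspace': 'acme', 'provider': 'orq', 'model_id': 'my-auto-router'}
```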
Credits
The Credits tab lets you manage your AI Router balance, add payment methods, configure automatic top-ups, and track all credit transactions.

Available Balance
Your current credit balance is displayed at the top of the Credits tab. Credits are consumed as you make requests through the AI Router. Each model call deducts from your balance based on token usage and model pricing.

When you exceed the traces included in your package, tracing costs are also deducted from your credits. Credit purchases have a minimum amount of $5.
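To get a feel for how a call deducts from the balance, the sketch below applies per-token pricing to a request. The prices and function are purely illustrative assumptions, not Orq.ai's actual rates or API:

```python
def estimate_call_cost(prompt_tokens: int, completion_tokens: int,
                       price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Estimate the credit deduction for one model call.
    Prices are per 1M tokens and are placeholder values --
    always check the model's actual pricing."""
    return (prompt_tokens * price_in_per_1m
            + completion_tokens * price_out_per_1m) / 1_000_000

# e.g. 1,000 prompt tokens and 500 completion tokens
# at hypothetical rates of $2.50 (input) / $10.00 (output) per 1M tokens:
print(round(estimate_call_cost(1_000, 500, 2.50, 10.00), 4))  # 0.0075
```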
Add Credits
Click Add Credits to purchase additional credits and top up your balance.

Add a Tax ID (optional)
Expand the Edit Tax ID section to add your tax identifier. Tax IDs are supported globally. Select your country and the applicable tax ID type, then enter your number. This is optional but recommended for businesses that need it reflected on invoices.
Enter your name and billing address
Provide your name and billing address. This is required for payment processing.
Payment Method
Add a credit card to enable credit purchases and auto top-up. Click Add new in the Payment Method panel to register a card. At least one payment method is required to enable auto top-up.
Auto Top-up
Auto top-up ensures your balance never runs out by automatically purchasing credits when it drops below a threshold. To configure auto top-up:

- Enable the Auto Top-up toggle.
- Set the If balance drops below threshold (e.g. $50). A top-up is triggered when your balance falls below this amount.
- Set the Add credits amount (e.g. $200). This is the amount automatically added each time.
- Click Save changes.
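The rule configured above can be sketched as a simple threshold check. This is a conceptual illustration of the behavior, using the example values from the steps, not Orq.ai code:

```python
def apply_auto_topup(balance: float, threshold: float = 50.0,
                     topup_amount: float = 200.0) -> float:
    """Mimic the auto top-up rule: when the balance drops below the
    threshold, add the configured credit amount (illustration only)."""
    if balance < threshold:
        return balance + topup_amount
    return balance

print(apply_auto_topup(42.0))  # 242.0 -- below $50, so $200 is added
print(apply_auto_topup(75.0))  # 75.0  -- above the threshold, unchanged
```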
Transaction History
The Transaction History table shows a log of credit purchases only. Usage deductions from individual API calls are not listed here.

| Column | Description |
|---|---|
| Date | The date of the transaction |
| Time | The time of the transaction |
| Type | The transaction type: Purchase (manual top-up) or Auto Top-up (automatic) |
| Amount | The credit amount added to your balance |
Invoices
Each credit purchase generates an invoice. To open an invoice, click the icon on the corresponding row in the Transaction History table and select Download Invoice.

Settings
The Settings tab lets you manage which models are used by default across your organization.
- Chat: the main model for all chat and conversation flows.
- Tool calling: handles structured outputs and function calls to external tools or APIs.
- Embedding: generates vector representations for search and recommendations.
- Insight: analyzes data, summarizes results, and generates insights.



