Enabling new Models in your Workspace
To get started with enabling models in your workspace, ensure you have connected your API keys to the desired providers. To learn more, see Connecting Providers.

You can filter the model list as follows:

- Providers lets you filter by LLM provider.
- Model Type lets you filter by the type of model (Chat, Completion, Embedding, Rerank, Vision).
- Active lets you filter on models that are enabled or disabled in your workspace.
- Owner lets you filter between Orq.ai-provided models and private models.
- API Key Status lets you filter on models for which you have added an API key.
Connecting Providers
For production workloads, use your own API keys with the supported providers. To set up your own API key, open the AI Studio and go to the Model Garden > Providers section. Choose the provider you wish to set an API key for, press Connect, then select Setup your own API Key.
Deeper Provider Integrations
Some providers require specific configurations; see the related documentation.

Having Multiple API Keys
You can configure multiple API keys for a single provider. To do so, select Add a new API key.

Benefits of using multiple API Keys
Credential Failure
Having more than one API key available can be useful if one becomes invalid or, for instance, runs out of credits. Configuring an extra key on a fallback model ensures you can still respond in all cases.

Multiple Environments
If you use different API keys for different purposes, this lets you point each model at the credentials dedicated to the correct environment.

Using a specific API key in model configuration
Once your API keys are configured within the Integration panel, you can use them within Playground, Experiment, Deployment, and Agent. This feature is available for any model, including Fallback models.
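The value of keeping an extra key on a fallback model can be sketched in a few lines. This is an illustrative sketch only, with hypothetical helper names; it is not the Orq.ai SDK:

```python
# Illustrative sketch: try each configured API key in order until one succeeds.
# `call` stands in for any provider invocation; names here are hypothetical.
def call_with_fallback(prompt, call, api_keys):
    """Return the first successful response, falling back across keys."""
    last_error = None
    for key in api_keys:
        try:
            return call(prompt, api_key=key)
        except RuntimeError as err:  # e.g. invalid key or exhausted credits
            last_error = err
    raise last_error  # every configured key failed


def fake_call(prompt, api_key):
    """Stand-in provider call that rejects the exhausted key."""
    if api_key == "expired-key":
        raise RuntimeError("invalid key or out of credits")
    return f"ok:{prompt}"


# The primary key fails, the fallback key answers.
print(call_with_fallback("hello", fake_call, ["expired-key", "backup-key"]))
# → ok:hello
```

In the platform this rotation is handled for you once the extra key is attached to a fallback model; the sketch only shows the intent.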
Onboarding Private Models
You can onboard private models by choosing Add Model at the top-right of the screen. This is useful when you have a model fine-tuned outside of Orq.ai that you want to use within a Deployment.

Private Models Providers
Referencing Private Models in Code
When referencing private models through our SDKs, API, or Supported Libraries, the model is referenced by the following string: <workspacename>@<provider>/<modelname>.
Example: corp@azure/gpt-4o-2024-05-13
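The reference string above can be assembled programmatically. A minimal helper, using a hypothetical function name rather than anything from an SDK:

```python
def private_model_ref(workspace: str, provider: str, model: str) -> str:
    """Build the <workspacename>@<provider>/<modelname> reference string."""
    return f"{workspace}@{provider}/{model}"


# Reproduces the example from the docs.
print(private_model_ref("corp", "azure", "gpt-4o-2024-05-13"))
# → corp@azure/gpt-4o-2024-05-13
```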