
Enabling new Models in your Workspace

To get started with enabling models in your Workspace, ensure you have connected your API keys to the desired Providers. To learn more, see Connecting Providers.
To see your model garden, head to the Model Garden section in your orq.ai Studio.

Searching through all available models

You have access to multiple filters to search models:
  • Providers lets you filter by LLM provider.
  • Model Type lets you filter by the type of model you want to see (Chat, Completion, Embedding, Rerank, Vision).
  • Active lets you filter by enabled or disabled models in your workspace.
  • Owner lets you filter between Orq.ai-provided models and private models.
You can now filter by Location to display only models available in Europe, in the US, or globally.
You can preview the pricing of each model within the Pricing column. To enable a model, toggle it on. It will immediately be available in any Prompt.

Using your own API keys

To start using models, you have to bring your own API keys. Head to the Providers tab to connect your keys for the supported providers.
Customers on the Enterprise package can use Orq.ai’s API keys for all models and be rebilled through their subscription.

Onboarding Private Models

You can onboard private models by choosing Add Model at the top-right of the screen. This can be useful when you have a model fine-tuned outside of orq.ai that you want to use within a Deployment.


Referencing Private Models in Code

When referencing private models through our SDKs, API, or Supported Libraries, use the following string format: <workspacename>@<provider>/<modelname>.
Example: corp@azure/gpt-4o-2024-05-13
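For illustration, here is a minimal Python sketch that assembles such a reference string. The helper function is hypothetical and not part of the orq.ai SDK; it simply reproduces the format above, and the resulting string is what you would pass wherever a model identifier is expected.

```python
# Illustrative helper (not part of the orq.ai SDK): builds the reference string
# for a private model in the <workspacename>@<provider>/<modelname> format.

def private_model_reference(workspace: str, provider: str, model: str) -> str:
    """Return the reference string used to address a private model."""
    return f"{workspace}@{provider}/{model}"

# Example from this guide: a private Azure model named gpt-4o-2024-05-13
# onboarded in the "corp" workspace.
ref = private_model_reference("corp", "azure", "gpt-4o-2024-05-13")
print(ref)  # corp@azure/gpt-4o-2024-05-13
# Pass this string as the model identifier in your SDK or API calls.
```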