Using the Model Garden
Enabling new Models in your Workspace
To see your model garden, head to the Model Garden section of your orq.ai Studio.

Searching through all available models
You have access to multiple filters to search models:
- Providers lets you filter by LLM provider.
- Model Type lets you choose which type of model to see (Chat, Completion, Embedding, Rerank, Vision).
- Active lets you filter between models that are enabled and disabled in your workspace.
- Owner lets you filter between orq.ai-provided models and your private models.
You can preview the pricing of each model by hovering on the Pricing tag.
To enable a model, simply turn on the toggle at the top-right of its card; it will immediately be available in your Prompt.
Using your own API keys
All models available through the Model Garden can be used via orq.ai without an API key; your usage is billed within your subscription.
You can also decide to use your own API keys within orq.ai; to do so, see Integration. In addition, you can directly integrate your Azure OpenAI, Amazon Bedrock, or Google Vertex AI accounts.
Onboarding Private Models
You can onboard private models by choosing Add Model at the top-right of the screen. This can be useful when you have a model fine-tuned outside of orq.ai that you want to use within a Deployment.
Private Models Providers
Azure
Here is an example configuration for an Azure model; entering the endpoint and API key will make your private model available on the platform.
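As a sketch of the values involved (the field names and the API version below are illustrative placeholders, not the exact form fields), an Azure OpenAI deployment is typically identified by its resource endpoint, an API key, the deployment name, and an API version:

```
{
  "endpoint": "https://my-resource.openai.azure.com",
  "api_key": "<your-azure-api-key>",
  "deployment": "my-finetuned-gpt-deployment",
  "api_version": "2024-02-01"
}
```

The endpoint and key can be found in the Azure portal under your Azure OpenAI resource's "Keys and Endpoint" page.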

Vertex AI
Here is an example configuration for a Google Vertex AI model; enter the JSON configuration to make your private model available on the platform.
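Assuming the JSON configuration referred to here is a Google Cloud service account key (the standard credential format for Vertex AI), it has the following well-known shape; all values below are placeholders:

```
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<key-material>\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@my-project.iam.gserviceaccount.com",
  "client_id": "<client-id>",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

You can download this file from the Google Cloud console under IAM & Admin → Service Accounts → Keys.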

LiteLLM
To import LiteLLM models, first create an Integration to your LiteLLM instance. Once created, return to the Model Garden and import the models from your instance.
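For context, the models exposed by a LiteLLM proxy instance are the ones declared in its `config.yaml` under `model_list`. A minimal sketch (the model and `api_base` values are assumptions for illustration):

```
model_list:
  - model_name: my-finetuned-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

Any `model_name` defined this way on your instance should appear as an importable model.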

Currently, we only support self-service onboarding of private models through the providers above. If you wish to onboard other private models, please contact [email protected] and we'll help you out.