Enabling new Models in your Workspace
To see your model garden, head to the Model Garden section in your orq.ai Studio.
Searching through all available models
- Providers let you filter which LLM provider you want to see.
- Model Type lets you filter by the type of model you want to see (Chat, Completion, Embedding, Rerank, Vision).
- Active lets you filter on enabled or disabled models in your workspace.
- Owner lets you filter between Orq.ai provided models and private models.
Using your own API keys
All models available through the model garden can be used through orq.ai without an API key; your usage is billed within your subscription. You can also choose to use your own API keys within orq.ai; to do so, see Providers. Alternatively, you can directly integrate your Azure OpenAI, Amazon Bedrock, Google Vertex AI, or LiteLLM setup.
Onboarding Private Models
You can onboard private models by choosing Add Model at the top-right of the screen. This is useful when you have a model fine-tuned outside of orq.ai that you want to use within a Deployment.
Referencing Private Models in Code
When referencing private models through our SDKs, API, or Supported Libraries, the model is referenced by the following string: `<workspacename>@<provider>/<modelname>`.
Example: corp@azure/gpt-4o-2024-05-13
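As a minimal sketch, the reference string can be assembled from its three parts. The helper function below is purely illustrative and not part of the orq.ai SDK; only the `<workspacename>@<provider>/<modelname>` format itself comes from the documentation above.

```python
# Illustrative only: compose a private-model reference in the
# <workspacename>@<provider>/<modelname> format described above.
# The function name and parameters are hypothetical, not an orq.ai SDK API.
def private_model_ref(workspace: str, provider: str, model: str) -> str:
    return f"{workspace}@{provider}/{model}"

ref = private_model_ref("corp", "azure", "gpt-4o-2024-05-13")
print(ref)  # corp@azure/gpt-4o-2024-05-13
```

Pass the resulting string wherever a model name is expected when calling your Deployment.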