Model Garden
The Model Garden lets you control which models are available for use in Orq.ai
The Model Garden is the space where you can find all LLMs available for your work. Orq.ai packages a number of LLMs that you can search and enable, and you can also onboard private models into the platform.
Only workspace admins have access to the Model Garden page and can enable or disable models as needed.
Benefits of Using the Model Garden
Use the latest models available
Using the Model Garden lets you benefit from model releases from various providers, as Orq takes care of integrating them. When new models are available, you can find them in your garden and enable them for use; they are immediately functional.
Unified API for all your models
Once a new model has been enabled in the garden, it can be immediately used within a Deployment.
As your application is integrated with Orq.ai, it doesn't need any updates: our APIs stay the same even after you change models.
Orq.ai ensures the integration and readiness of any model available in the Model Garden, letting you focus on your application while using Orq.ai as an AI Gateway.
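For illustration, here is a minimal sketch of what this looks like from the application side. The endpoint path, payload fields, and deployment key below are assumptions made for the example; refer to the Orq.ai API reference for the exact invocation contract. The point is that the request your application sends stays identical when the model behind the Deployment changes.

```python
import os
import requests

# Hypothetical example: invoking a Deployment through the Orq.ai API.
# The URL, payload fields, and deployment key are illustrative
# assumptions; check the Orq.ai API reference for the exact contract.
API_URL = "https://api.orq.ai/v2/deployments/invoke"  # assumed endpoint
API_KEY = os.environ["ORQ_API_KEY"]

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "key": "customer-support-assistant",  # hypothetical Deployment key
        "messages": [
            {"role": "user", "content": "How do I reset my password?"},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())

# If a workspace admin later switches the Deployment to a newly enabled
# model from the Model Garden, this code keeps working unchanged: the
# model swap happens on the Orq.ai side, behind the same API.
```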
Enabling new Models in your Workspace
To see your Model Garden, head to the Model Garden section in your Orq.ai panel.
You have access to multiple filters to search models:
- Providers lets you filter which LLM providers you want to see.
- Model Type lets you decide which type of model you want to see (Chat, Completion, Embedding, Rerank, Vision).
- Active lets you filter on enabled or disabled models in your workspace.
- Owner lets you filter between Orq.ai-provided models and private models.
You can preview the pricing of each model by hovering over the Pricing tag.
To enable a model, simply switch on the toggle at the top-right of its card; it will immediately be available for use in your Playgrounds, Experiments, and Deployments.
Using your own API keys
All models available through the Model Garden are usable through Orq.ai without an API key; your usage is billed within your subscription.
You can decide to use your own API keys within Orq; to do so, see API keys & Endpoints. You can also directly integrate your Azure, AWS Bedrock, and Vertex AI accounts.
Onboarding Private Models
You can onboard private models by choosing Add Model at the top-right of the screen.
This can be useful when you have a model fine-tuned outside of Orq.ai that you want to use within a Deployment.
Currently we only support self-service onboarding of private models through Azure. If you wish to onboard other private models, please contact [email protected] and we'll help you out.