Set up your own API key with the new AnyScale integration.
You could already select models such as Llama from Meta and Mixtral from Mistral in the model garden. With this release, you can now connect your own API key for AnyScale, so you use your own account and rate limits without relying on a shared key. Soon you will also be able to use your own private models and fine-tuning on AnyScale.
Mass Experimentation
You could already test out different models and configurations in our Playground. With the introduction of Experiments, you can now do this at a much larger scale.
Python and Node SDKs now support multimodality
We improved our SDKs to support more types of models: completion, chat, and image models can now all be used through our unified API.
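As a rough, non-authoritative sketch, this is how invoking a deployment from the Python SDK could look, with the same call covering chat, completion, and image models because the deployment configuration decides which model runs. All names here (`Orquesta`, `OrquestaClientOptions`, `deployments.invoke`, the `customer-support-chat` deployment key, and the parameter names) are assumptions for illustration; check the SDK reference for the actual classes and signatures.

```python
# Minimal sketch of invoking a deployment through the Python SDK.
# Class, method, and parameter names are assumptions made for
# illustration; verify them against the current SDK reference.
from orquesta_sdk import Orquesta, OrquestaClientOptions

client = Orquesta(
    OrquestaClientOptions(api_key="<ORQUESTA_API_KEY>")
)

# The same invoke call is assumed to work whether the deployment is
# backed by a completion, chat, or image model; the deployment
# configuration decides which model is actually called.
result = client.deployments.invoke(
    key="customer-support-chat",                 # hypothetical deployment key
    inputs={"question": "Where is my order?"},   # hypothetical prompt variables
)

print(result.choices[0].message.content)
```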
Function calling supported in the Playground
To improve the UX on Orquesta, we improved how function calling is handled in the Playground and the Prompt Studio.
Billable transactions are now shown in the log details
The log details now show whether a request to the LLM was executed with Orquesta's API keys or with your own API keys.
Gemini, Google’s most capable model, now available on Orquesta
Gemini Pro can now be used on Orquesta through the Playground and Deployments. The model also supports function calling.
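As a hedged sketch, a deployment configured with Gemini Pro is invoked the same way as any other deployment. The client and `deployments.invoke` call below reuse the assumed names from the earlier example, and the `tool_calls` check on the response is also an assumption; the actual response model may expose function calls differently.

```python
# Sketch of calling a deployment whose model is set to Gemini Pro.
# Names and response fields are assumptions; check the SDK reference.
from orquesta_sdk import Orquesta, OrquestaClientOptions

client = Orquesta(
    OrquestaClientOptions(api_key="<ORQUESTA_API_KEY>")
)

# "gemini-support-bot" is a hypothetical deployment key whose model
# is configured as Gemini Pro in the Deployments UI.
result = client.deployments.invoke(
    key="gemini-support-bot",
    inputs={"question": "What is the status of order 1234?"},
)

message = result.choices[0].message
# If the deployment defines functions and the model decides to call one,
# the call is assumed to appear on the message (field name assumed here).
if getattr(message, "tool_calls", None):
    for call in message.tool_calls:
        print("Function requested:", call)
else:
    print(message.content)
```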
Log and metrics enhancement
We introduced a new log detail drawer with a better, more segmented data structure, making the information easier for our users to consume.
Chat history now available in the Playground
If you are using the Playground, you can now use Playground memory. This feature lets you add historical context when comparing different models simultaneously.
Support for Dall-E models
Added support for Dall-E models from OpenAI and Azure in the Playground and Deployments.
orq.ai Integrations
We added a new Integrations section so you can use your own API keys. At the moment, we support custom API keys from the following LLM providers: