Google Gemini Pro 1.5

by Cormick Marskamp

Start using Google's Gemini Pro 1.5 across all three modules on Orq. The newest model from Google is not only more advanced than its predecessor but also boasts several key differentiating features.

Top reasons why you should start testing Gemini Pro 1.5 today:

  • 1 million token context window - allowing it to take in far more context than before.
  • Complex multimodal reasoning - it can understand video, audio, code, images, and text.
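
To get a feel for what a 1 million token window means in practice, the sketch below estimates token counts with a crude 4-characters-per-token heuristic. The heuristic and constants are illustrative assumptions; real tokenizers vary per model and language.

```python
# Rough heuristic: ~4 characters per token for English text.
# Real tokenizers vary, so treat this as an estimate only.
CHARS_PER_TOKEN = 4
GEMINI_1_5_PRO_WINDOW = 1_000_000  # advertised context window, in tokens

def estimated_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_window(text: str, window: int = GEMINI_1_5_PRO_WINDOW) -> bool:
    """True if the text is likely to fit in the model's context window."""
    return estimated_tokens(text) <= window

novel = "word " * 200_000  # ~1 MB of text, far beyond older 32k windows
print(estimated_tokens(novel), fits_in_window(novel))  # prints: 250000 True
```

By this estimate, roughly 4 MB of plain text fits in a single request, which is why whole codebases or hour-long transcripts become viable inputs.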

Vision

by Cormick Marskamp

Start using vision models to unlock even more of Generative AI's potential.

Vision models and image models are different. While image models create images, vision models interpret them.

We currently support the following vision models:

  • Claude-3-haiku
  • Claude-3-sonnet
  • Claude-3-opus
  • GPT-4-turbo-vision
  • GPT-4-vision-preview
  • Gemini-pro-vision
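
Vision models are typically called through the same chat API as text models, with the image passed as an extra content part. Below is a minimal sketch of an OpenAI-style request payload; the model name and image URL are placeholders, and other providers use slightly different shapes.

```python
def build_vision_request(model: str, question: str, image_url: str) -> dict:
    """Assemble a chat-completion payload with a text part and an image part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "gpt-4-turbo",  # any vision-capable model from the list above
    "What is shown in this image?",
    "https://example.com/photo.png",
)
```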

Command R & Command R+

We now support the latest models from Cohere: Command R & Command R+.

Command R is a highly advanced language model designed to facilitate smooth and natural conversations and perform complex tasks requiring long context. It belongs to the "scalable" category of models that provide both high performance and accuracy.

Key strengths of Command R:

  • High precision on retrieval augmented generation (RAG) and tool use tasks (function calling)
  • Low latency and high throughput
  • Large 128,000-token context window
  • Multilingual capabilities across 10 key languages, with pre-training data for 13 additional languages
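
To illustrate what "high precision on RAG tasks" is used for, here is a minimal retrieval augmented generation sketch: a toy keyword retriever picks the most relevant documents and prepends them to the prompt. The retriever and prompt format are illustrative assumptions, not Cohere's API.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context to the user question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Command R has a 128,000-token context window.",
    "The cafeteria opens at nine.",
    "Command R supports tool use via function calling.",
]
print(build_rag_prompt("What context window does Command R have?", docs))
```

In a production setup the keyword overlap would be replaced by embedding search, and the assembled prompt sent to the model; the grounding structure stays the same.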

Prompt Studio V2

by Cormick Marskamp

Today we released the new Prompt Studio V2. Similar to the updated Playground V2, the Prompt Studio is designed for intuitive prompt engineering and consistency across the entire platform.

The Prompt Studio can be found when clicking on a variant in the business rules engine. Within the Prompt Studio, you can configure your LLM call. It supports function calling (tools), variable inputs, retries, and fallbacks.
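
The retry-and-fallback behavior can be pictured as a simple loop: try the primary model a few times, then move on to the next one. A hand-rolled sketch (not the Orq implementation; `call_model` stands in for any LLM call):

```python
import time

def call_with_fallbacks(call_model, models, retries=2, backoff=0.0):
    """Try each model in order; retry transient failures before falling back."""
    last_error = None
    for model in models:
        for attempt in range(retries + 1):
            try:
                return call_model(model)
            except Exception as exc:  # in practice, catch provider-specific errors
                last_error = exc
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all models failed") from last_error

# Example: a primary model that always times out, and a working fallback.
def fake_call(model):
    if model == "primary-model":
        raise TimeoutError("primary timed out")
    return f"answer from {model}"

print(call_with_fallbacks(fake_call, ["primary-model", "fallback-model"]))
```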

Check out the interactive walkthrough below to test it out yourself.


🚧 We recently transitioned our SDKs from @orquesta to @orq-ai across all platforms to align with our branding and focus on AI. This includes some breaking changes!

Python SDK

Installation

Before

pip install orquesta-sdk

After

pip install orq-ai-sdk

Usage

Before

import os

from orquesta_sdk import Orquesta, OrquestaClientOptions

api_key = os.environ.get("ORQUESTA_API_KEY", "__API_KEY__")

options = OrquestaClientOptions(
    api_key=api_key,
    environment="production"
)

client = Orquesta(options)

After

import os

from orq_ai_sdk import OrqAI

client = OrqAI(
    api_key=os.environ.get("ORQ_API_KEY", "__API_KEY__"),
    environment="production"
)

Node SDK

Installation

Before

npm install @orquesta/node
yarn add @orquesta/node

After

npm install @orq-ai/node
yarn add @orq-ai/node

Usage

Before

import { createClient } from '@orquesta/node';

const client = createClient({
  apiKey: 'orquesta-api-key',
  environment: 'production',
});

After

import { createClient } from '@orq-ai/node';

const client = createClient({
  apiKey: 'orquesta-api-key',
  environment: 'production',
});

Playground V2

by Cormick Marskamp

Today, we launched our new Playground V2. This updated layout is designed for prompt engineering. By keeping the prompt template separate from the chat messages, it's easier to fine-tune your prompt. When you clear the chat messages, the prompt template will still be there.
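
The separation of prompt template and chat messages can be pictured like this: the template (with its variables) is rendered fresh on every request, and clearing the chat leaves it untouched. A minimal sketch with names of my own choosing, not the Playground's internals:

```python
class PromptSession:
    """Keeps the prompt template separate from the running chat history."""

    def __init__(self, template: str):
        self.template = template
        self.messages: list[dict] = []

    def send(self, user_input: str, **variables) -> list[dict]:
        """Render the template as a system message plus the chat so far."""
        self.messages.append({"role": "user", "content": user_input})
        system = {"role": "system", "content": self.template.format(**variables)}
        return [system] + self.messages

    def clear_chat(self):
        """Clearing the chat does not touch the template."""
        self.messages = []

session = PromptSession("You are a {tone} assistant.")
session.send("Hello!", tone="friendly")
session.clear_chat()
assert session.template == "You are a {tone} assistant."  # template survives
```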

Check out our new Playground V2 in the interactive walkthrough below!

With the launch of Claude 3, Anthropic is challenging the status quo. Its most advanced model 'Opus' is supposedly better than GPT-4. However, it's also around 2.5 times more expensive than GPT-4. Haiku, the cheapest and least capable model, is around the same price as GPT-3.5 and is really fast.

On another note, none of the Claude models support image or video generation, but they can interpret and analyze images. All three also come with a 200k-token context window by default, making them useful for larger data sets and variables.

The Sonnet and Haiku models are available on Anthropic as well as on AWS Bedrock, whereas Opus is currently only available on Anthropic.

You can toggle them on in the Model Garden and try them out yourself!

Hyperlinking

by Cormick Marskamp

With the new Hyperlinking feature, you can take your use case from one module to another. Switching between the Playground, Experiments, and Deployments allows you to iterate quickly across the whole platform. Whether you want to take your Playground setup to Experiments or your Deployment to the Playground, it's all possible.

Example: You are running a Deployment but want to quickly test out what would happen if you change your prompt. You don't want to change the prompt within your Deployment because it's in production. With the new Hyperlinking feature, you can open the exact same configuration used in one module in the other. Follow these steps to try it out yourself:

  1. Navigate to the Logs tab
  2. Click on the log with the configuration that you want to iterate on
  3. Click on the three-dotted menu in the upper right corner
  4. Click 'Open in Playground'
  5. Click on 'View Playground'
  6. You'll be redirected to the Playground and see the exact same configuration you are using in your Deployment