Quick start

Jumpstart your journey with orq.ai's LLMOps platform. Our Quick Start Guide provides step-by-step instructions to navigate the platform, optimize your LLM operations, and elevate your AI capabilities. Harness the power of AI with orq.ai now.

Welcome to orq.ai, your gateway to the future of Large Language Model Operations (LLMOps). If you're ready to harness the full potential of public and private LLMs while achieving complete transparency on performance and costs, you're in the right place.

With orq.ai, you can supercharge your SaaS applications using no-code collaboration tooling for prompt engineering, experimentation, operations, and monitoring. Say goodbye to lengthy release cycles and say hello to a world where innovation happens in minutes, not weeks.

In this Quick Start guide, we'll walk you through the essentials so you can hit the ground running with orq.ai. Let's dive in!

Step 1: Set up your workspace

Step 2: Enable models in the Model Garden

Orq.ai lets you pick the models you want to work with. Enabling models is easy: navigate to the Model Garden and enable the models of your choice.

Step 3: Set up your first Deployment

Deployments are at the heart of orq.ai. Within Deployments, orq.ai handles all the complexity for you, including the actual LLM call and full logging of responses. On top of that, Deployments come with extra superpowers: you can configure retries and fallback models, as illustrated below.

Learn more about Deployments
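
To make retries and fallbacks concrete, here is a minimal illustrative sketch (in TypeScript, like the SDK examples below) of the behavior a Deployment runs on your behalf. The names callModel, primaryModel, fallbackModel, and maxRetries are hypothetical placeholders; this is not the platform's actual implementation.

// Illustrative only: the retry-and-fallback pattern a Deployment applies for you.
// callModel, primaryModel, fallbackModel, and maxRetries are hypothetical placeholders.
async function invokeWithFallback(
  callModel: (model: string) => Promise<string>,
  primaryModel: string,
  fallbackModel: string,
  maxRetries: number
): Promise<string> {
  // Retry the primary model up to maxRetries times...
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callModel(primaryModel);
    } catch {
      // ...swallowing the error and trying again on failure.
    }
  }
  // If the primary model keeps failing, switch to the fallback model.
  return callModel(fallbackModel);
}

Orq.ai runs this logic server-side, so your application only ever makes a single call.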

  • Go to Deployments.
  • Create your first Deployment by clicking Add Deployment and filling out the form: add your Deployment key and a Domain.
  • Add a variant to the Deployment.
  • Configure your Deployment variant to suit your specific needs by adding the prompt message, primary model, retries, fallback model, and some notes for your team (optional).
  • You can add variables to the prompt message using double curly braces, with the variable name between them, for example: {{ question }}. See the sketch after this list for how to supply variable values at invocation time.
  • You can also create a function by clicking on the + icon in the Tools section. Once you are done, click on the Save button.
  • Click on the Deploy button.
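
When you consume the Deployment later (see Step 5), you supply values for these prompt variables with the request. Here is a minimal sketch, assuming the Node SDK's invoke call accepts an inputs object mapping variable names to values; double-check the exact field name in the Code Snippet Generator for your Deployment.

// Minimal sketch: supplying a value for the {{ question }} prompt variable.
// The `inputs` field is an assumption; confirm the exact parameter name in
// the Code Snippet Generator. `client` is the instance created in Step 5.
const deployment = await client.deployments.invoke({
  key: "Prompt-engineering-examples",
  inputs: {
    question: "What is LLMOps?"
  }
});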

Step 4: Install the orq.ai SDK

  • Use the Code Snippet Generator in each Deployment to select your preferred programming language.
  • Alternatively, locate the software development kit (SDK) for your language yourself.
  • Run the installation command for your package manager; a quick import check follows the commands below.
# npm
npm install @orq-ai/node --save

# yarn
yarn add @orq-ai/node

# pip
pip install orq-ai-sdk
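
To confirm the package installed correctly, you can run a quick sanity check using the same createClient export consumed in the next step:

// Quick smoke test: this import should resolve after installing @orq-ai/node.
import { createClient } from '@orq-ai/node';

console.log(typeof createClient); // expected output: "function"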

Step 5: Consume your first Deployment from your application

  • Retrieve your API key from your workspace settings page: https://my.orquesta.cloud/<workspacekey>/settings/developers
    Click Reveal key, then click the key to copy it.
  • In any part of your application where you want to consume the Deployment, add the snippet provided by the Code Snippet Generator.
// Creating a client instance
import { createClient } from '@orq-ai/node';

const client = createClient({
  // Replace __API_KEY__ with your key. Prefer loading it from an environment
  // variable (as the Python example below does) instead of hard-coding it.
  api_key: '__API_KEY__',
  environment: 'production',
});


// Usage
const deployment = await client.deployments.invoke({
  key: "Prompt-engineering-examples",
  context: {
    environments: ["test", "develop"],
    locale: ["en"],
    "user-segment": ["employees"],
    "user-role": ["editor"]
  },
  metadata: {
    "custom-field-name": "custom-metadata-value"
  }
});

// Streaming your deployment
const stream = client.deployments.invokeWithStream({
  key: "Prompt-engineering-examples",
  context: {
    environments: ["test", "develop"],
    locale: ["en"],
    "user-segment": ["employees"],
    "user-role": ["editor"]
  },
  metadata: {
    "custom-field-name": "custom-metadata-value"
  }
});

for await (const chunk of stream) {
  console.log(chunk.choices?.[0]?.message?.content);

  if (chunk.is_final) {
    console.log('Stream finished');
  }
}

# Create a client instance
import os

from orq_ai_sdk import Orquesta, OrquestaClientOptions

api_key = os.environ.get("ORQUESTA_API_KEY", "__API_KEY__")

options = OrquestaClientOptions(
    api_key=api_key,
    environment="production",
)

client = Orquesta(options)

# Usage
deployment = client.deployments.invoke(
    key="Prompt-engineering-examples",
    context={
        "environments": ["test", "develop"],
        "locale": ["en"],
        "user-segment": ["employees"],
        "user-role": ["editor"],
    },
    metadata={"custom-field-name": "custom-metadata-value"},
)

print(deployment.choices[0].message.content)

# Streaming your deployment
deployment_stream = client.deployments.invoke_with_stream(
    key="Prompt-engineering-examples",
    context={
        "environments": ["test", "develop"],
        "locale": ["en"],
        "user-segment": ["employees"],
        "user-role": ["editor"],
    },
    metadata={"custom-field-name": "custom-metadata-value"},
)

for chunk in deployment_stream:
    print("Received data:", chunk.content)

    if chunk.is_final:
        print("Stream is finished")

You can toggle the Code Snippet Generator by clicking the </> button.