Changelog

added

Chunking API

The Chunking API is now live - the final building block for fully programmatic knowledge base workflows. With this new API, you can automatically split (“chunk”) your text data in the way that best fits your needs, all without leaving your workflow.
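
As a rough sketch, a programmatic chunking call could look like the snippet below. The endpoint path, strategy name, and response shape are assumptions for illustration; check the API reference for the actual contract.

```python
import os
import requests

# Hypothetical endpoint and payload, for illustration only;
# consult the Orq.ai API reference for the actual contract.
API_KEY = os.environ["ORQ_API_KEY"]

response = requests.post(
    "https://api.orq.ai/v2/chunking",          # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "text": open("handbook.txt").read(),   # raw text to split
        "strategy": "sentence",                # assumed chunking strategy name
        "max_chunk_size": 512,                 # assumed size parameter
    },
    timeout=30,
)
response.raise_for_status()

for chunk in response.json()["chunks"]:        # assumed response shape
    print(len(chunk), chunk[:60])
```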

added

Evaluators API

We've expanded evaluator support with two new APIs, enabling more flexible, code-driven workflows.

added

Connect LiteLLM models to Orq

You can now connect your LiteLLM-managed models directly to Orq.ai, making it even easier for teams already using LiteLLM to get started with Orq, without changing their LLM gateway.

added

Self-onboard Vertex AI models

You can now onboard your own Vertex AI models, including fine-tuned and private models.

added

Custom Human Review

Today we're introducing Custom Human Review, a major update that gives teams full flexibility in collecting and structuring human feedback in Orq.ai.

added

Evaluator playground

We’ve added a new way to quickly test your evaluators directly within the evaluator configuration screen.

added

Attach files directly to the model

With our latest update, you can now attach PDF files directly to Gemini and OpenAI models. Simply include the encoded file in your deployment invocation, and the model will natively process the PDF - no more manual file uploads or bloated prompts.
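
A minimal sketch of what "including the encoded file" might look like: the PDF is base64-encoded and sent alongside the prompt in a deployment invocation. The endpoint path, the deployment key, and the file field names are assumptions here; see the deployment API docs for the exact message format.

```python
import base64
import os
import requests

API_KEY = os.environ["ORQ_API_KEY"]

# Base64-encode the PDF so it can travel inside a JSON payload.
with open("report.pdf", "rb") as f:
    encoded_pdf = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://api.orq.ai/v2/deployments/invoke",   # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "key": "pdf-summarizer",                  # hypothetical deployment key
        "messages": [{"role": "user", "content": "Summarize this report."}],
        # Assumed field names for the attached file:
        "file_data": {"name": "report.pdf", "data": encoded_pdf},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```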

added

Evaluator runs without new generation

You can now evaluate existing LLM outputs without generating new responses. Previously, running an evaluator required creating a new output every time. With this update, you can retroactively score any response already stored in your dataset.
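
A sketch of what retroactive scoring could look like, assuming a hypothetical evaluator-run endpoint that takes an evaluator key plus an existing input/output pair; the endpoint and field names below are illustrative, not the documented contract.

```python
import os
import requests

API_KEY = os.environ["ORQ_API_KEY"]

# Hypothetical: score a response that already exists in a dataset,
# without triggering a new generation.
response = requests.post(
    "https://api.orq.ai/v2/evaluators/run",       # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "evaluator_key": "answer-relevance",      # hypothetical evaluator
        "input": "What is our refund policy?",    # the original prompt
        "output": "Refunds are issued within 14 days.",  # the stored response
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())   # e.g. a score plus an explanation; shape assumed
```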

added

Reasoning field now included in the API response

With the latest update to our reasoning models, API responses now include the reasoning and reasoning_signature fields. These additions provide more transparency into how the model arrives at its answers.
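
For example, a client that already parses responses can pick up the new fields as shown below; the envelope around the reasoning and reasoning_signature fields is an assumed shape for illustration.

```python
import json

# Example response body trimmed to the parts relevant here; the exact
# structure around these fields may differ from this assumed shape.
raw = """
{
  "choices": [{
    "message": {
      "content": "The answer is 42.",
      "reasoning": "Summed the series, then simplified.",
      "reasoning_signature": "sig_abc123"
    }
  }]
}
"""

message = json.loads(raw)["choices"][0]["message"]
print(message["reasoning"])            # the model's reasoning trace
print(message["reasoning_signature"])  # signature for the reasoning block
```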

added

Experiment Runs overview

You can now compare different Experiment runs in a single overview using the new Runs tab.