Introduction
The newly released Knowledge Bases API offers users the ability to create, manage, and enrich Knowledge Bases using the orq.ai API. In this guide we will see how to manipulate the different entities involved in building a Knowledge Base.

Prerequisites
API Key
To get started using the API, an API key is needed for use within the SDKs or the HTTP API. To get an API key ready, see Authentication.
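For illustration, assuming the API expects the key as a standard Bearer token (check the Authentication page for the exact scheme), every HTTP request would carry a header like this:

```python
API_KEY = "YOUR_ORQ_API_KEY"  # hypothetical placeholder; never hardcode real keys

# Standard Bearer-token header, sent with every HTTP API request:
headers = {"Authorization": f"Bearer {API_KEY}"}
```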
SDKs
Creating a Knowledge Base
To create a Knowledge Base, we'll be using the Create a Knowledge API. The necessary inputs to create a knowledge base are:
- key: defines its name.
- embedding_model: choose here a model to create embeddings. To find an embedding model, head to the Model Garden and filter models with Model Type = Embedding. The value should be formatted as follows: supplier/model_name, example: cohere/embed-english-v3.0.
- path: the Project and folder the Knowledge Base will be created in, formatted as follows: project/path, example: Default/Production.
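A minimal sketch of the create call using Python's standard library. The base URL and endpoint path are assumptions; check the Create a Knowledge API reference for the exact route:

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"         # see Authentication

# The three inputs described above:
knowledge_payload = {
    "key": "product-docs",
    "embedding_model": "cohere/embed-english-v3.0",
    "path": "Default/Production",
}

req = urllib.request.Request(
    f"{API_BASE}/knowledge",         # endpoint path assumed
    data=json.dumps(knowledge_payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     knowledge = json.load(resp)             # response carries the knowledge id
```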
(Optional) Uploading a File
The most common use case when building a knowledge base is uploading a file (e.g. a PDF) containing the data that you want models to search. Before integrating your source file into the Knowledge Base, the file needs to be created and uploaded; for that, we'll use the Create file API. To upload a file, simply point the API to the path of your file and give it a name.

Creating a Datasource
A Datasource is an integral part of the Knowledge Base: it holds chunks of data within which a model can search and make retrievals returned in a RAG use case. A Knowledge Base can hold any number of Datasources. To create a datasource, we'll be using the Create a datasource API. The following fields are needed:
- knowledge_id, corresponding to the previously created knowledge.
- (optional) file_id, from the previously created file, if you want to prepopulate the datasource with your file.
- name, a name for the datasource.

The response contains the datasource id, which is needed in the following steps.
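A sketch of the datasource creation request. The endpoint path and the exact placement of knowledge_id (path parameter vs. body field) are assumptions; file_id would come from the Create file API response:

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"
KNOWLEDGE_ID = "knowledge_123"       # hypothetical id from the create step
FILE_ID = "file_456"                 # hypothetical id from the Create file API

datasource_payload = {
    "name": "product-manual",        # a name for the datasource
    "file_id": FILE_ID,              # optional: prepopulate with the uploaded file
}

req = urllib.request.Request(
    f"{API_BASE}/knowledge/{KNOWLEDGE_ID}/datasources",   # path shape assumed
    data=json.dumps(datasource_payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     datasource = json.load(resp)            # response carries the datasource id
```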
Viewing Datasource Chunks
Once a Datasource is populated, either from a file or manually, it holds a Chunk for each part of the data that can be searched and retrieved. To view chunks, we use the List all chunks for a datasource API. The only needed inputs are the previously acquired datasource id and knowledge id.
The resulting call looks as follows:
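A sketch with Python's standard library (the base URL and path shape are assumptions):

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"

def chunks_url(knowledge_id: str, datasource_id: str) -> str:
    """Build the list-chunks URL from the two ids (path shape assumed)."""
    return f"{API_BASE}/knowledge/{knowledge_id}/datasources/{datasource_id}/chunks"

req = urllib.request.Request(
    chunks_url("knowledge_123", "datasource_456"),   # hypothetical ids
    headers={"Authorization": f"Bearer {API_KEY}"},  # GET is the default method
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     chunks = json.load(resp)
```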
The result contains the list of chunks, with the data for each chunk.
Chunking Data
Before adding Chunks to a Datasource, prepare the content by chunking the data to best fit the needs of the Knowledge Base. orq exposes a Chunking text API that prepares data for Datasource ingestion.

Common Parameters
All chunking strategies support these parameters:
- text (required): The text content to be chunked
- strategy (required): The chunking strategy to use (token, sentence, recursive, semantic, or agentic)
- metadata (optional, default: true): Whether to include metadata for each chunk (start_index, end_index, token_count)
- return_type (optional, default: "chunks"): Return format - "chunks" (with metadata) or "texts" (plain strings)
Chunking Strategies
Choose which chunking strategy to use for the source data:

Token Chunker
Splits your text based on token count. Great for keeping chunks small enough for LLMs and for consistent embedding sizes. Additional Parameters:
- chunk_size (optional, default: 512): Maximum tokens per chunk
- chunk_overlap (optional, default: 0): Number of tokens to overlap between chunks
Sentence Chunker
Breaks your text at sentence boundaries, so each chunk stays readable and sentences remain intact. Additional Parameters:
- chunk_size (optional, default: 512): Maximum tokens per chunk
- chunk_overlap (optional, default: 0): Number of overlapping tokens between chunks
- min_sentences_per_chunk (optional, default: 1): Minimum number of sentences per chunk
Recursive Chunker
Chunks text by working down a hierarchy (paragraphs, then sentences, then words) to maintain document structure. Additional Parameters:
- chunk_size (optional, default: 512): Maximum tokens per chunk
- separators (optional, default: ["\n\n", "\n", " ", ""]): Hierarchy of separators to use for splitting
- min_characters_per_chunk (optional, default: 24): Minimum characters allowed per chunk
Semantic Chunker
Groups together sentences that are topically related, so each chunk makes sense on its own. Additional Parameters:
- embedding_model (required): Embedding model to use for semantic similarity (e.g., "openai/text-embedding-3-small")
- chunk_size (optional, default: 512): Maximum tokens per chunk
- threshold (optional, default: "auto"): Similarity threshold for grouping (0-1) or "auto" for automatic detection
- mode (optional, default: "window"): Chunking mode - "window" or "sentence"
- similarity_window (optional, default: 1): Window size for similarity comparison
- dimensions (optional): Number of dimensions for embedding output (required for text-embedding-3 models, range: 256-3072)
- max_tokens (optional, default: 8191): Maximum number of tokens per embedding request
Agentic Chunker
Uses an LLM to determine the best split points, ideal for complex documents that need intelligent segmentation. Additional Parameters:
- model (required): Model to use for chunking (e.g., "openai/gpt-4.1")
- chunk_size (optional, default: 1024): Maximum tokens per chunk
- candidate_size (optional, default: 128): Size of candidate splits for LLM evaluation
- min_characters_per_chunk (optional, default: 24): Minimum characters allowed per chunk
Example API Call
Here is an example call using the semantic chunking strategy.

Response Format
The API returns a JSON object with a chunks array. Each chunk contains:
- id: Unique identifier for the chunk (ULID format)
- text: The actual text content of the chunk
- index: The position index of this chunk in the sequence (0-based)
- metadata (when metadata: true):
  - start_index: Starting character position in the original text
  - end_index: Ending character position in the original text
  - token_count: Number of tokens in this chunk
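Putting the request and response together, a sketch of a semantic chunking call. The endpoint path is an assumption; the parameters and response fields follow the descriptions above:

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"

# Common parameters plus the semantic strategy's additional parameters:
chunking_request = {
    "text": "First topic sentence. More on the first topic. Now a second topic.",
    "strategy": "semantic",
    "embedding_model": "openai/text-embedding-3-small",
    "chunk_size": 512,
    "threshold": "auto",       # or a similarity value between 0 and 1
    "metadata": True,
    "return_type": "chunks",
}

req = urllib.request.Request(
    f"{API_BASE}/chunking",    # endpoint path assumed
    data=json.dumps(chunking_request).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # uncomment with a real key
#     for chunk in json.load(resp)["chunks"]:
#         print(chunk["index"], chunk["metadata"]["token_count"])
```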
Adding Chunk to a Datasource
It is possible to manually add a Chunk to a Datasource; to do so, we use the Create chunk API. The needed inputs are:
- The previously fetched knowledge_id.
- The desired Datasource to add data to.
- The Text to add to the chunk.
- Optional Metadata to enrich the chunk.
Metadata is used to further classify the chunk with additional information that can then be used to search more precisely through the knowledge base. To learn more about search, see Searching a Knowledge Base.
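A sketch of the create-chunk request (the endpoint path is an assumption; the field names follow the inputs listed above, and the metadata values are illustrative):

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"
KNOWLEDGE_ID = "knowledge_123"       # hypothetical ids
DATASOURCE_ID = "datasource_456"

chunk_payload = {
    "text": "The warranty covers manufacturing defects for two years.",
    "metadata": {                    # optional: string/number/boolean values only
        "page_id": "page_x1i2j3",
        "edition": 2021,
        "published": True,
    },
}

req = urllib.request.Request(
    f"{API_BASE}/knowledge/{KNOWLEDGE_ID}/datasources/{DATASOURCE_ID}/chunks",
    data=json.dumps(chunk_payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)   # uncomment with a real key
```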
Once a Knowledge Base is created, it can be used within a Prompt to retrieve Chunk data during model generation. To learn more, see Using a Knowledge Base in a Prompt.
Knowledge Base Search
Knowledge Base search in orq.ai allows you to query your uploaded documents and data using vector similarity search. You can perform semantic searches across your content and apply metadata filters to narrow results to specific subsets of your data. This enables you to build powerful RAG (Retrieval-Augmented Generation) applications that find relevant information in your knowledge base to enhance LLM responses. You can search knowledge bases using the dedicated Search Knowledge Base API, which provides programmatic access to perform queries with optional metadata filtering and search options.

Basic Search
A basic search queries your knowledge base using semantic similarity to find the most relevant chunks for your query.

Filter by metadata
Every chunk in the knowledge base can include metadata key-value pairs to store additional information. This metadata can be added via the Chunks API or via the Studio. When searching the knowledge base, you can include a metadata filter to limit the search to chunks matching a filter expression. A search without a metadata filter ignores metadata and searches the entire Knowledge Base.
Metadata types
Metadata payloads must be key-value pairs in a JSON object. Keys must be strings, and values can be one of the following data types:
- String
- Number
- Boolean
Metadata constraints
- Use metadata for concise, discrete filter attributes to maximize search performance.
- Avoid placing large text blobs in metadata; long strings will result in slower queries.
- Keep each field's data type consistent. Our system attempts to coerce mismatched values during ingestion; non-coercible values are discarded and omitted from the chunk.
Metadata filter expressions
orq.ai Knowledge Base filtering is based on MongoDB's query and projection operators. We currently support a subset of those selectors:

| Filter | Description | Example | Supported types |
|---|---|---|---|
| $eq | Search chunks with metadata values that are equal to a specified value. | {"page_id": {"eq": "page_x1i2j3"}} | Number, string, boolean |
| $ne | Search chunks with metadata values that are not equal to a specified value. | {"page_id": {"ne": "page_x1i2j3"}} | Number, string, boolean |
| $gt | Search chunks with metadata values that are greater than a specified value. | {"edition": {"gt": 2019}} | Number |
| $gte | Search chunks with metadata values that are greater than or equal to a specified value. | {"edition": {"gte": 2020}} | Number |
| $lt | Search chunks with metadata values that are less than a specified value. | {"edition": {"lt": 2022}} | Number |
| $lte | Search chunks with metadata values that are less than or equal to a specified value. | {"edition": {"lte": 2020}} | Number |
| $in | Search chunks with metadata values that are in a specified array. | {"page_id": {"in": ["page_x1i2j3", "page_y1xiijas"]}} | String, number, boolean |
| $nin | Search chunks with metadata values that are not in a specified array. | {"page_id": {"nin": ["comedy", "documentary"]}} | String, number, boolean |
| $and | Joins query clauses with a logical AND. | {"$and": [{"page_id": {"eq": "page_x1i2j3"}}, {"edition": {"gte": 2020}}]} | Object (array of filter expressions) |
| $or | Joins query clauses with a logical OR. | {"$or": [{"page_id": {"eq": "page_x1i2j3"}}, {"edition": {"gte": 2020}}]} | Object (array of filter expressions) |
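Putting it together, a sketch of a Search Knowledge Base call combining a semantic query with an $and metadata filter. The endpoint path and the query/filter field names are assumptions; the filter expression itself follows the selectors in the table above:

```python
import json
import urllib.request

API_BASE = "https://api.orq.ai/v2"   # base URL is an assumption
API_KEY = "YOUR_ORQ_API_KEY"
KNOWLEDGE_ID = "knowledge_123"       # hypothetical id

# Semantic query plus a metadata filter, following the selectors above:
search_request = {
    "query": "What changed in the latest edition?",   # field name assumed
    "filter": {                                       # field name assumed
        "$and": [
            {"page_id": {"eq": "page_x1i2j3"}},
            {"edition": {"gte": 2020}},
        ]
    },
}

def search_knowledge(knowledge_id: str, body: dict) -> dict:
    """POST the search request with Bearer auth (endpoint path assumed)."""
    req = urllib.request.Request(
        f"{API_BASE}/knowledge/{knowledge_id}/search",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# results = search_knowledge(KNOWLEDGE_ID, search_request)  # requires a real key
```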