Integrating a Deployment
Learn how to integrate Orq.ai Deployments into your system and application.
Once you have set up a Deployment with model configurations ready to be exposed to your users, you can start the integration process, which involves invoking your Deployment from within your own environments.
This document shows how to fetch prepared code snippets for your Deployment and use them to integrate orq.ai into your systems.
If you don't have Deployments ready to be integrated, see Setting up a Deployment.
Getting Code Snippets
The first step of the integration is fetching the code related to the chosen Deployment. Each Deployment can contain several Variants.
Which Variant is exposed is configured through Routing; to learn more, see Routing.
You can see a snippet for a Variant in two ways:
Via the Routing Page
- Open a Deployment and go to the Routing Page.
- Right-click on the Variant you want to integrate.
- Select Generate Code Snippet.
Via the Variant Page
- Open a Deployment and go to the Variant Page.
- Press the Code Snippet icon at the top-right of the panel.
A code snippet panel will open.
In this panel, all context attributes are pre-filled so that your Routing rules are respected.
To learn more about context attributes and routing, see Routing.
Using a Code Snippet
Multiple programming languages are available to integrate your Deployment.
We currently support Python, JavaScript (Node.js), and shell (cURL).
Getting Credentials
The first step of an integration is to have an API key ready to use.
If you don't have an API key yet, you can fetch one from your panel; see how in our Authentication documentation.
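As a minimal sketch, you can keep the key out of your source code by storing it in an environment variable and reading it at runtime. The variable name ORQ_API_KEY below is only a convention assumed here, not something orq.ai enforces:

```python
import os

# Read the API key from the environment so it never ends up in source control.
# ORQ_API_KEY is an assumed variable name; use whatever convention your team prefers.
api_key = os.environ["ORQ_API_KEY"]
```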
Initializing a client
Depending on the chosen programming language, you will have different methods to initialize your client. All methods require the previously acquired API key.
To learn more about client initialization, see the authentication tutorial for our Client Libraries.
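For example, with the Python Client Library initialization is a single constructor call. The snippet below is a sketch: the package name orq_ai_sdk and the Orq class match recent versions of the Python SDK, but names and constructor arguments may differ in your version, so prefer the generated code snippet from the panel.

```python
import os

from orq_ai_sdk import Orq  # package and class names assume a recent Python SDK version

# Initialize the client once with your API key and reuse it across requests.
client = Orq(api_key=os.environ["ORQ_API_KEY"])
```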
Invoking a deployment
Once your authentication layer is ready, you can invoke your Deployment.
Invoking means sending a query to the underlying model, which can include your user's request; orq.ai takes care of routing the request to the correct language model with all prepared configurations and returns the model's response.
To learn more about Deployment invocation, see the tutorial for our Client Libraries.
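As a rough sketch using the Python client from the previous step, an invocation passes the Deployment key together with any context attributes and input variables your Deployment expects. The key, context fields, and inputs below are placeholders, not values from your workspace:

```python
# Invoke the Deployment: orq.ai resolves the Variant via your Routing rules,
# calls the underlying model, and returns its response.
generation = client.deployments.invoke(
    key="customer_support_assistant",          # hypothetical Deployment key
    context={"environments": ["production"]},  # context attributes matching your Routing rules
    inputs={"customer_name": "Jane"},          # prompt variables defined in the Variant
)

# Recent SDK versions return a chat-completion-style object.
print(generation.choices[0].message.content)
```

The response shape shown here assumes a chat-style completion; check the generated snippet in the panel for the exact fields your SDK version returns.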
Once you have invoked a first Deployment, there are more options available to you; look into our Client Libraries to explore them.