curl --request GET \
  --url https://api.orq.ai/v2/deployments \
  --header 'Authorization: Bearer <token>'

{
  "object": "list",
  "data": [
    {
      "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
      "created": "<string>",
      "updated": "<string>",
      "key": "<string>",
      "description": "<string>",
      "prompt_config": {
        "tools": [
          {
            "type": "function",
            "function": {
              "name": "<string>",
              "parameters": {
                "type": "object",
                "properties": {},
                "required": [
                  "<string>"
                ],
                "additionalProperties": false
              },
              "description": "<string>",
              "strict": true
            },
            "display_name": "<string>",
            "id": 123
          }
        ],
        "model": "<string>",
        "model_type": "chat",
        "model_parameters": {
          "temperature": 123,
          "maxTokens": 123,
          "topK": 123,
          "topP": 123,
          "frequencyPenalty": 123,
          "presencePenalty": 123,
          "numImages": 123,
          "seed": 123,
          "format": "url",
          "dimensions": "<string>",
          "quality": "<string>",
          "style": "<string>",
          "responseFormat": {
            "type": "json_schema",
            "json_schema": {
              "name": "<string>",
              "schema": {},
              "description": "<string>",
              "strict": true
            },
            "display_name": "<string>"
          },
          "photoRealVersion": "v1",
          "encoding_format": "float",
          "reasoningEffort": "none",
          "budgetTokens": 123,
          "verbosity": "low",
          "thinkingLevel": "low"
        },
        "provider": "openai",
        "messages": [
          {
            "role": "system",
            "content": "<string>",
            "tool_calls": [
              {
                "type": "function",
                "function": {
                  "name": "<string>",
                  "arguments": "<string>"
                },
                "id": "<string>",
                "index": 123
              }
            ],
            "tool_call_id": "<string>"
          }
        ]
      },
      "version": "<string>"
    }
  ],
  "has_more": true
}

Returns a list of your deployments. The deployments are returned sorted by creation date, with the most recent deployments appearing first.
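The same request can be made with any HTTP client. Below is a minimal sketch in Python; it assumes the requests library and an ORQ_API_KEY environment variable, which are illustration choices and not part of this reference.

# Minimal sketch: list deployments over plain HTTP.
# Assumes the `requests` library and an ORQ_API_KEY environment variable;
# this is not an official SDK call.
import os
import requests

api_key = os.environ["ORQ_API_KEY"]  # assumption: token stored in this env var

response = requests.get(
    "https://api.orq.ai/v2/deployments",
    headers={"Authorization": f"Bearer {api_key}"},
    params={"limit": 10},  # optional; the default page size is 10
    timeout=30,
)
response.raise_for_status()

page = response.json()
for deployment in page["data"]:
    print(deployment["id"], deployment.get("key"))
print("has_more:", page["has_more"])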
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
A limit on the number of objects to be returned. Limit can range between 1 and 50, and the default is 10.
Required range: 1 <= x <= 50
A cursor for use in pagination. starting_after is an object ID that defines your place in the list. For instance, if you make a list request and receive 20 objects, ending with 01JJ1HDHN79XAS7A01WB3HYSDB, your subsequent call can include starting_after=01JJ1HDHN79XAS7A01WB3HYSDB in order to fetch the next page of the list.
A cursor for use in pagination. ending_before is an object ID that defines your place in the list. For instance, if you make a list request and receive 20 objects, starting with 01JJ1HDHN79XAS7A01WB3HYSDB, your subsequent call can include ending_before=01JJ1HDHN79XAS7A01WB3HYSDB in order to fetch the previous page of the list.
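Putting the cursor parameters together, a full listing can be fetched page by page with starting_after. The sketch below reuses the assumed requests/ORQ_API_KEY setup from the earlier example; parameter names follow the descriptions above.

# Sketch of cursor pagination with limit and starting_after.
import os
import requests

headers = {"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"}
params = {"limit": 50}
deployments = []

while True:
    page = requests.get(
        "https://api.orq.ai/v2/deployments",
        headers=headers,
        params=params,
        timeout=30,
    ).json()
    deployments.extend(page["data"])
    if not page["has_more"]:
        break
    # Resume after the last object ID received on this page.
    params["starting_after"] = page["data"][-1]["id"]

print(f"fetched {len(deployments)} deployments")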
List all deployments
Available options: list
Unique identifier for the object.
Date in ISO 8601 format at which the object was created.
Date in ISO 8601 format at which the object was last updated.
The deployment's unique key.
An arbitrary string attached to the object. Often useful for displaying to users.
The type of the tool. Currently, only function is supported.
Available options: function
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The parameters the function accepts, described as a JSON Schema object.
Omitting parameters defines a function with an empty parameter list.
Available options: object
Available options: true, false
A description of what the function does, used by the model to choose when and how to call the function.
The modality of the model.
Available options: chat, completion, embedding, image, tts, stt, rerank, moderation, vision
Model Parameters: Not all parameters apply to every model.
Only supported on chat and completion models.
Only supported on chat and completion models.
Only supported on chat and completion models.
Only supported on chat and completion models.
Only supported on chat and completion models.
Only supported on chat and completion models.
Only supported on image models.
Best-effort deterministic seed for the model. Currently, only OpenAI models support this.
Only supported on image models.
Available options: url, b64_json, text, json_object
Only supported on image models.
Only supported on image models.
Only supported on image models.
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema
Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
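As an illustration, a responseFormat value that enables Structured Outputs might look like the sketch below. Only the outer shape (type, json_schema with name, description, strict, and schema) comes from this reference; the schema contents and the name deployment_summary are invented for the example.

# Hypothetical responseFormat payload for Structured Outputs.
# The nested schema and its field names are made up for illustration.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "deployment_summary",  # assumed name, for illustration only
        "description": "A short structured summary",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "sentiment": {"type": "string", "enum": ["positive", "negative"]},
            },
            "required": ["title", "sentiment"],
            "additionalProperties": False,
        },
    },
}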
Available options: json_schema
The version of photoReal to use. Must be v1 or v2. Only available for the leonardoai provider.
Available options: v1, v2
The format to return the embeddings in.
Available options: float, base64
Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
Available options: none, disable, minimal, low, medium, high
Gives the model enhanced reasoning capabilities for complex tasks. A value of 0 disables thinking. The minimum thinking budget is 1024 tokens, and the budget tokens should never exceed the max tokens parameter. Only supported by Anthropic.
Controls the verbosity of the model output.
Available options: low, medium, high
The level of thinking to use for the model. Only supported by Google AI.
Available options: low, high
Available options: openai, groq, cohere, azure, aws, google, google-ai, huggingface, togetherai, perplexity, anthropic, leonardoai, fal, nvidia, jina, elevenlabs, litellm, cerebras, openailike, bytedance, mistral, deepseek, contextualai, moonshotai
The role of the prompt message.
Available options: system, assistant, user, exception, tool, prompt, correction, expected_output
The contents of the user message. Either the text content of the message or an array of content parts with a defined type; each part can be of type text or image_url when passing in images. You can pass multiple images by adding multiple image_url content parts. Can be null for tool messages in certain scenarios.
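A hypothetical message mixing text and image content parts might look like the sketch below. Only the idea of text and image_url part types comes from the description above; the URL, prompt text, and the exact inner shape of the image_url part are assumptions for illustration.

# Hypothetical chat message with mixed text and image_url content parts.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this picture?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/photo.png"},  # assumed shape
        },
    ],
}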
Available options: function
The version of the deployment.