Retrieves a prompt object.

curl --request GET \
  --url https://api.orq.ai/v2/prompts/{id} \
  --header 'Authorization: Bearer <token>'

{
"_id": "<string>",
"type": "prompt",
"owner": "<string>",
"domain_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"created": "<string>",
"updated": "<string>",
"display_name": "<string>",
"prompt_config": {
"messages": [
{
"role": "system",
"content": "<string>",
"tool_calls": [
{
"type": "function",
"function": {
"name": "<string>",
"arguments": "<string>"
},
"id": "<string>",
"index": 123
}
],
"tool_call_id": "<string>"
}
],
"stream": true,
"model": "<string>",
"model_db_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"model_type": "chat",
"model_parameters": {
"temperature": 123,
"maxTokens": 123,
"topK": 123,
"topP": 123,
"frequencyPenalty": 123,
"presencePenalty": 123,
"numImages": 123,
"seed": 123,
"format": "url",
"dimensions": "<string>",
"quality": "<string>",
"style": "<string>",
"responseFormat": {
"type": "json_schema",
"json_schema": {
"name": "<string>",
"schema": {},
"description": "<string>",
"strict": true
},
"display_name": "<string>"
},
"photoRealVersion": "v1",
"encoding_format": "float",
"reasoningEffort": "none",
"budgetTokens": 123,
"verbosity": "low",
"thinkingLevel": "low"
},
"provider": "openai",
"integration_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"version": "<string>"
},
"created_by_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"updated_by_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"description": "<string>",
"metadata": {
"language": "English"
}
}
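The same request can also be made from code. Below is a minimal sketch in Python using the requests library, assuming only the endpoint and Bearer authentication shown above; the ORQ_API_KEY environment variable name and the placeholder prompt id are illustrative, not part of the API.

import os
import requests

# Illustrative placeholders: substitute your own prompt id and API key source.
PROMPT_ID = "<your prompt id>"
API_KEY = os.environ["ORQ_API_KEY"]

response = requests.get(
    f"https://api.orq.ai/v2/prompts/{PROMPT_ID}",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

prompt = response.json()
print(prompt["display_name"], prompt["prompt_config"]["model"])
for message in prompt["prompt_config"]["messages"]:
    print(message["role"], message.get("content"))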
Authorizations

Authorization
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Path Parameters

id
Unique identifier of the prompt.
Response

Prompt retrieved.
A prompt entity with configuration, metadata, and versioning.

type
Available options: prompt

display_name
The prompt's name, meant to be displayable in the UI. Maximum length: 128.

prompt_config.messages
A list of messages compatible with the OpenAI schema.
prompt_config.messages[].role
The role of the prompt message. Available options: system, assistant, user, exception, tool, prompt, correction, expected_output

prompt_config.messages[].content
The contents of the user message. Either the text content of the message or an array of content parts with a defined type; each part can be of type text or image_url when passing in images. You can pass multiple images by adding multiple image_url content parts. Can be null for tool messages in certain scenarios.

prompt_config.messages[].tool_calls[].type
Available options: function
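To show how the message fields above fit together, here is a sketch of an assistant message carrying a tool call and the matching tool reply, written as Python dicts; the function name, arguments, and call id are hypothetical, and only fields from the response example above are used.

# Hypothetical assistant message carrying a single tool call, following the
# OpenAI-compatible message shape shown in the response example above.
assistant_message = {
    "role": "assistant",
    "content": None,  # assumption: content omitted while the model calls a tool
    "tool_calls": [
        {
            "type": "function",
            "id": "call_abc123",  # illustrative tool call id
            "index": 0,
            "function": {
                "name": "get_weather",  # hypothetical function name
                "arguments": '{"city": "Amsterdam"}',
            },
        }
    ],
}

# The tool's result is returned as a tool message that references the call id.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_c": 18}',
}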
prompt_config.model_db_id
The id of the resource.

prompt_config.model_type
The modality of the model. Available options: chat, completion, embedding, image, tts, stt, rerank, moderation, vision

prompt_config.model_parameters
Model parameters. Not all parameters apply to every model. Child attributes:

temperature: Only supported on chat and completion models.
maxTokens: Only supported on chat and completion models.
topK: Only supported on chat and completion models.
topP: Only supported on chat and completion models.
frequencyPenalty: Only supported on chat and completion models.
presencePenalty: Only supported on chat and completion models.
numImages: Only supported on image models.
seed: Best-effort deterministic seed for the model. Currently only OpenAI models support this.
format: Only supported on image models. Available options: url, b64_json, text, json_object
dimensions: Only supported on image models.
quality: Only supported on image models.
style: Only supported on image models.
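As an illustration of the note that not every parameter applies to every model, a model_parameters object for a chat model would use only the chat/completion fields listed above; the values below are arbitrary examples, not recommendations.

# Hypothetical model_parameters for a chat model. Image-only fields such as
# numImages, format, dimensions, quality, style, and photoRealVersion are omitted.
chat_model_parameters = {
    "temperature": 0.7,
    "maxTokens": 1024,
    "topK": 40,
    "topP": 0.9,
    "frequencyPenalty": 0.0,
    "presencePenalty": 0.0,
    "seed": 42,  # best-effort determinism; currently only OpenAI models support this
}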
model_parameters.responseFormat
An object specifying the format that the model must output.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

responseFormat.type
Available options: json_schema

responseFormat.json_schema
Child attributes: name, schema, description, strict.
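Putting the two modes described above side by side, a responseFormat value for Structured Outputs might look like the sketch below; the schema name and fields are invented for illustration, and the commented-out line shows the JSON mode alternative.

# Structured Outputs: constrains the model to match the supplied JSON schema.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "ticket_summary",  # hypothetical schema name
        "description": "A short summary of a support ticket.",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["title", "priority"],
            "additionalProperties": False,
        },
    },
}

# JSON mode: guarantees valid JSON but does not check it against a schema.
# Remember to also instruct the model to produce JSON via a system or user message.
# response_format = {"type": "json_object"}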
model_parameters.photoRealVersion
The version of photoReal to use. Must be v1 or v2. Only available for the leonardoai provider. Available options: v1, v2

model_parameters.encoding_format
The format to return the embeddings in. Available options: float, base64

model_parameters.reasoningEffort
Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. Available options: none, disable, minimal, low, medium, high

model_parameters.budgetTokens
Gives the model enhanced reasoning capabilities for complex tasks. A value of 0 disables thinking. The minimum budget for thinking is 1024 tokens. budgetTokens should never exceed the maxTokens parameter. Only supported by Anthropic.
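The budgetTokens rules above (0 disables thinking, otherwise at least 1024 and never more than maxTokens) can be expressed as a small helper; this is a sketch of the documented constraints, not part of the API.

def validate_budget_tokens(budget_tokens: int, max_tokens: int) -> int:
    """Check budgetTokens against the documented rules: 0 disables thinking,
    otherwise the value must be at least 1024 and must not exceed maxTokens."""
    if budget_tokens == 0:
        return 0  # thinking disabled
    if budget_tokens < 1024:
        raise ValueError("budgetTokens must be at least 1024 when thinking is enabled")
    if budget_tokens > max_tokens:
        raise ValueError("budgetTokens must not exceed maxTokens")
    return budget_tokens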
model_parameters.verbosity
Controls the verbosity of the model output. Available options: low, medium, high

model_parameters.thinkingLevel
The level of thinking to use for the model. Only supported by Google AI. Available options: low, high

prompt_config.provider
Available options: openai, groq, cohere, azure, aws, google, google-ai, huggingface, togetherai, perplexity, anthropic, leonardoai, fal, nvidia, jina, elevenlabs, litellm, cerebras, openailike, bytedance, mistral, deepseek, contextualai, moonshotai

prompt_config.integration_id
The ID of the integration to use.

description
The prompt's description, meant to be displayable in the UI. Use this field to optionally store a long-form explanation of the prompt for your own purposes.

metadata.use_cases
A list of use cases that the prompt is meant to be used for. Use this field to categorize the prompt for your own purposes. Available options: Agents simulations, Agents, API interaction, Autonomous Agents, Chatbots, Classification, Code understanding, Code writing, Conversation, Documents QA, Evaluation, Extraction, Multi-modal, Self-checking, Sentiment analysis, SQL, Summarization, Tagging, Translation (document), Translation (sentences)

metadata.language
The language that the prompt is written in. Use this field to categorize the prompt for your own purposes. Available options: Chinese, Dutch, English, French, German, Russian, Spanish