Create an Image

curl --request POST \
  --url https://api.orq.ai/v2/gateway/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "prompt": "<string>",
  "model": "<string>",
  "background": "transparent",
  "moderation": "low",
  "n": 1,
  "output_compression": 50,
  "output_format": "png",
  "quality": "auto",
  "response_format": "url",
  "size": "<string>",
  "style": "vivid",
  "orq": {
    "name": "<string>",
    "fallbacks": [
      { "model": "openai/gpt-4o-mini" }
    ],
    "cache": {
      "type": "exact_match",
      "ttl": 3600
    },
    "retry": {
      "count": 3,
      "on_codes": [429, 500, 502, 503, 504]
    },
    "contact": {
      "id": "contact_01ARZ3NDEKTSV4RRFFQ69G5FAV",
      "display_name": "Jane Doe",
      "email": "[email protected]",
      "metadata": [
        {
          "department": "Engineering",
          "role": "Senior Developer"
        }
      ],
      "logo_url": "https://example.com/avatars/jane-doe.jpg",
      "tags": ["hr", "engineering"]
    },
    "load_balancer": [
      { "model": "openai/gpt-4o", "weight": 0.7 },
      { "model": "anthropic/claude-3-5-sonnet", "weight": 0.3 }
    ],
    "timeout": {
      "call_timeout": 30000
    }
  }
}
'

Example response:

{
  "created": 123,
  "data": [
    {
      "revised_prompt": "<string>",
      "b64_json": "<string>",
      "url": "<string>"
    }
  ],
  "usage": {
    "input_tokens_details": {
      "image_tokens": 123,
      "text_tokens": 123
    },
    "input_tokens": 123,
    "output_tokens": 123,
    "total_tokens": 123
  }
}

Authorization
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
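
For orientation, here is a minimal Python sketch of the same call, assuming the requests library; the prompt, the model choice, and the ORQ_API_KEY environment variable are placeholders, not values required by the API:

import os
import requests

# Minimal sketch of the request above; adjust prompt and model to your needs.
API_URL = "https://api.orq.ai/v2/gateway/images/generations"

headers = {
    "Authorization": f"Bearer {os.environ['ORQ_API_KEY']}",  # your auth token (placeholder env var)
    "Content-Type": "application/json",
}
payload = {
    "prompt": "A watercolor lighthouse at dusk",  # placeholder prompt
    "model": "openai/dall-e-3",                   # placeholder; any supported model id
    "n": 1,
    "response_format": "url",
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["data"][0]["url"])
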
prompt
A text description of the desired image(s).

model
The model to use for image generation. One of openai/dall-e-2, openai/dall-e-3, or openai/gpt-image-1.

background
Sets the transparency for the background of the generated image(s). This parameter is only supported for openai/gpt-image-1.
Available options: transparent, opaque, auto

moderation
Controls the content-moderation level for images generated by gpt-image-1. Must be either low or auto.
Available options: low, auto

n
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
Required range: 1 <= x <= 10

output_compression
The compression level (0-100%) for the generated images. This parameter is only supported for gpt-image-1 with the webp or jpeg output formats.
Required range: 0 <= x <= 100

output_format
The format in which the generated images are returned. This parameter is only supported for openai/gpt-image-1.
Available options: png, jpeg, webp

quality
The quality of the image that will be generated. auto will automatically select the best quality for the given model.
Available options: auto, high, medium, low, hd, standard

response_format
The format in which generated images are returned. Must be one of url or b64_json. This parameter isn't supported for gpt-image-1, which always returns base64-encoded images.
Available options: url, b64_json

size
The size of the generated images. Must be one of the sizes supported by the selected model.

style
The style of the generated images. This parameter is only supported for openai/dall-e-3. Must be one of vivid or natural.
Available options: vivid, natural

orq.name
The name to display on the trace. If not specified, the default system name will be used.

orq.retry
Retry configuration for the request.

orq.contact
Information about the contact making the request. If the contact does not exist, it will be created automatically.

orq.contact.id
Unique identifier for the contact.
Example: "contact_01ARZ3NDEKTSV4RRFFQ69G5FAV"

orq.contact.display_name
Display name of the contact.
Example: "Jane Doe"

orq.contact.email
Email address of the contact.

orq.contact.logo_url
URL to the contact's avatar or logo.
Example: "https://example.com/avatars/jane-doe.jpg"

orq.contact.tags
A list of tags associated with the contact.
Example: ["hr", "engineering"]

orq.load_balancer
Array of models with weights for load balancing requests (a combined request sketch follows this parameter list).
Example:
[
  { "model": "openai/gpt-4o", "weight": 0.7 },
  { "model": "anthropic/claude-3-5-sonnet", "weight": 0.3 }
]
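
To tie the orq options together, the sketch below sends one request that combines fallbacks, retries, caching, load balancing, and a call timeout. The values are copied from the example payload above, and the comments state the presumed semantics of each option; treat them as illustrative rather than prescriptive.

import os
import requests

payload = {
    "prompt": "An isometric illustration of a small data center",  # placeholder prompt
    "model": "openai/gpt-image-1",                                 # placeholder primary model
    "n": 1,
    "orq": {
        # Model(s) to try if the primary call fails (value from the example above).
        "fallbacks": [{"model": "openai/gpt-4o-mini"}],
        # Retry up to 3 times on the listed transient status codes.
        "retry": {"count": 3, "on_codes": [429, 500, 502, 503, 504]},
        # Cache exact-match prompts; ttl is presumably in seconds.
        "cache": {"type": "exact_match", "ttl": 3600},
        # Weighted split of traffic across models (weights assumed to be proportions).
        "load_balancer": [
            {"model": "openai/gpt-4o", "weight": 0.7},
            {"model": "anthropic/claude-3-5-sonnet", "weight": 0.3},
        ],
        # Per-call timeout, presumably in milliseconds.
        "timeout": {"call_timeout": 30000},
    },
}

response = requests.post(
    "https://api.orq.ai/v2/gateway/images/generations",
    headers={"Authorization": f"Bearer {os.environ['ORQ_API_KEY']}"},  # placeholder env var
    json=payload,
    timeout=60,
)
response.raise_for_status()
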
Represents an image generation response from the API.

created
The Unix timestamp (in seconds) of when the image was created.

data
Represents the URL or the content of a generated image.
data[].revised_prompt
The prompt that was used to generate the image, if there was any revision to the prompt.

data[].b64_json
The base64-encoded JSON of the generated image, if response_format is b64_json.

data[].url
The URL of the generated image, if response_format is url (the default). A short sketch of handling both formats follows.
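
As a rough Python sketch, assuming the response body shape documented above and the requests library for downloads:

import base64
import requests

def save_first_image(body: dict, path: str = "image.png") -> None:
    """Save the first generated image, whether it arrived as base64 or as a URL."""
    item = body["data"][0]
    if item.get("b64_json"):
        # gpt-image-1 (or response_format=b64_json) returns base64-encoded image data.
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
    elif item.get("url"):
        # Default response_format=url: fetch the image from the returned link.
        with open(path, "wb") as f:
            f.write(requests.get(item["url"], timeout=60).content)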