Using Vision in a Prompt
In the Playgrounds, you can choose a Vision model to quickly test hypotheses and prompts that involve images.
To get started, make sure you are familiar with Prompts; see Creating a Prompt.
Selecting a Vision Model in the Playground
To use Vision, select a model that has a vision tag next to its name:

The vision label means that the model is able to interpret images.
Using Vision in the Playground
You can use Vision models just like any other model in the Playground.
To include an image as an input for your model, click on the image icon at the top-right of your message. You will then be able to share a link or upload an image to be sent to the model.

An example use case with a Vision model
Using Vision through code
You can use Vision models through our API and SDK.
curl --request POST \
  --url https://api.orq.ai/v2/deployments/invoke \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "user",
      "content": "describe what you see in this image"
    },
    {
      "role": "user",
      "content": [
        {
          "type": "image_url",
          "image_url": {
            "url": "Either a URL of the image or the base64 encoded image data."
          }
        }
      ]
    }
  ]
}
'
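The same request can also be sent from code. The sketch below is a minimal Python example that mirrors the curl call above; the Bearer authorization header, the ORQ_API_KEY environment variable, and the base64 data-URL format for local images are assumptions for illustration, so check the API reference for the exact authentication and payload requirements.

import base64
import os

import requests

# Endpoint taken from the curl example above; the bearer-token header below is
# an assumption -- confirm the authentication scheme in the API reference.
API_URL = "https://api.orq.ai/v2/deployments/invoke"
API_KEY = os.environ["ORQ_API_KEY"]  # hypothetical environment variable name


def encode_image(path: str) -> str:
    """Read a local image and return it as a base64 data URL (assumed format)."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{encoded}"


payload = {
    "messages": [
        {"role": "user", "content": "describe what you see in this image"},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    # Either a public URL or a base64-encoded image works here.
                    "image_url": {"url": encode_image("photo.jpg")},
                }
            ],
        },
    ]
}

response = requests.post(
    API_URL,
    headers={
        "accept": "application/json",
        "content-type": "application/json",
        "authorization": f"Bearer {API_KEY}",  # assumed auth header
    },
    json=payload,
)
response.raise_for_status()
print(response.json())

The response body contains the model's description of the image, in the same format you see when invoking any other deployment.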