Vision API: Where to insert the “High / Low” Detail Parameter?
The documentation talks about a low/high detail parameter, but there is no sample showing how or where it needs to be specified.
## Low or high fidelity image understanding
By controlling the detail parameter, which has two options, low or high, you have control over how the model processes the image and generates its textual understanding.
I have exactly the same problem! If anyone has a solution, we're interested. Especially since the costs seem drastically lower in "low" mode, it would be good to know whether it degrades performance too much.
It goes under the "image_url" key, in the same scope as the "url" key.
```python
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"{prompt}"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                        "detail": "low"  # set to "low" or "high"
                    }
                }
            ]
        }
    ],
    "max_tokens": 300
}
```
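
For completeness, here is a minimal sketch of sending that payload with the requests library. It assumes your key is in the OPENAI_API_KEY environment variable and that `prompt` and `base64_image` are already defined:

```python
import os
import requests

# Minimal sketch: POST the payload above to the chat completions endpoint.
# Assumes the API key is available in the OPENAI_API_KEY environment variable.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers=headers,
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])
```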
Thanks for sharing.
Can you tell me something about the minimum resolution for uploaded images?
It looks to me like very small images cannot be interpreted.
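
In the meantime I'm considering upscaling small images before encoding them, something like the sketch below. The 512 px threshold is just a guess on my part, not a documented minimum.

```python
import base64
import io

from PIL import Image

# Hypothetical workaround: upscale very small images before base64-encoding.
# The 512 px minimum side used here is an assumption, not a documented limit.
MIN_SIDE = 512

def encode_image(path: str) -> str:
    img = Image.open(path)
    if min(img.size) < MIN_SIDE:
        scale = MIN_SIDE / min(img.size)
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")
```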