The API call to https://api.openai.com/v1/chat/completions returns token usage information in the response, as provided below:
"usage": { "prompt_tokens": 13, "completion_tokens": 8, "total_tokens": 21 }
However, for the image generation API (https://api.openai.com/v1/images/generations), there is no similar usage information in the response.
Our application stores usage information in its database for the purpose of calculating the cost of each API call. How can the usage (token) information be fetched/calculated for vision API calls?
Welcome to the dev community!
Images are priced per image…
| Model | Quality | Resolution | Price |
|---|---|---|---|
| DALL·E 3 | Standard | 1024×1024 | $0.040 / image |
| | Standard | 1024×1792, 1792×1024 | $0.080 / image |
| DALL·E 3 | HD | 1024×1024 | $0.080 / image |
| | HD | 1024×1792, 1792×1024 | $0.120 / image |
| DALL·E 2 | | 1024×1024 | $0.020 / image |
| | | 512×512 | $0.018 / image |
| | | 256×256 | $0.016 / image |
Scroll down on the pricing page…
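Since the response carries no usage block, one option is a lookup table in your own code keyed on model, quality, and size. A minimal sketch (the prices mirror the table above; the function name, key layout, and error handling are my own, not part of the API):

```python
from typing import Optional

# Hypothetical per-image price lookup mirroring the pricing table above.
# Keys: (model, quality, size); values: USD per image.
IMAGE_PRICES = {
    ("dall-e-3", "standard", "1024x1024"): 0.040,
    ("dall-e-3", "standard", "1024x1792"): 0.080,
    ("dall-e-3", "standard", "1792x1024"): 0.080,
    ("dall-e-3", "hd", "1024x1024"): 0.080,
    ("dall-e-3", "hd", "1024x1792"): 0.120,
    ("dall-e-3", "hd", "1792x1024"): 0.120,
    ("dall-e-2", None, "1024x1024"): 0.020,
    ("dall-e-2", None, "512x512"): 0.018,
    ("dall-e-2", None, "256x256"): 0.016,
}

def image_cost(model: str, size: str, quality: Optional[str] = None, n: int = 1) -> float:
    """Return the cost in USD for n generated images, or raise if the combination is unknown."""
    key = (model, quality.lower() if quality else None, size)
    try:
        return IMAGE_PRICES[key] * n
    except KeyError:
        raise ValueError(f"No price on record for {key}")
```

Keeping this table in one place (or in config) makes it easy to update when the pricing page changes.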
Hi Paul,
Thank you for the response.
So, if I understand correctly, we would need to implement logic in our code to calculate the cost based on the image size. However, I’m wondering if there’s any chance this information could be provided in the response from the API itself?
You can get the revised_prompt back, but since there are no tokens, there's really no need to pass usage along. If you're sending a specific request to the API, you know the size, and hence the cost, too.
Here’s what you get back…
https://platform.openai.com/docs/api-reference/images/object
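Putting that together: since the response has no usage block, you can build the database row from the request parameters you already know plus the documented response shape (`created`, and a `data` list whose items carry `url` or `b64_json` and, for DALL·E 3, `revised_prompt`). A hedged sketch; the record field names are invented and the response dict is hand-made, not real API output:

```python
# Sketch: combine known request parameters with the documented images
# response shape (created, data[].revised_prompt) to build a usage row.
# Record field names are invented for illustration.
def build_cost_record(request: dict, response: dict, price_per_image: float) -> dict:
    data = response.get("data", [])
    return {
        "created": response.get("created"),
        "model": request["model"],
        "size": request["size"],
        "quality": request.get("quality"),
        "images": len(data),
        "cost_usd": price_per_image * len(data),
        "revised_prompts": [d.get("revised_prompt") for d in data],
    }

# Example with a fabricated response (not real API output):
req = {"model": "dall-e-3", "size": "1024x1024", "quality": "standard"}
resp = {
    "created": 1700000000,
    "data": [{"url": "https://example.com/img.png",
              "revised_prompt": "A detailed rewrite of the prompt"}],
}
record = build_cost_record(req, resp, price_per_image=0.040)
```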