Charged per request, not per image

I am currently using the image API to create images from text prompts.
I just checked how many images have been created, and to my surprise it does not match the amount OpenAI wants to charge me for using the API.

I have accumulated roughly 60 USD, which at the listed price should correspond to 1,500 images.

Instead I got somewhere between 300 and 400.

I then checked my activity and can see that my script made around 1,500 requests today.
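For what it's worth, the numbers line up with per-request rather than per-image billing (prices and counts taken from the post; the 350 figure is just the midpoint of the 300–400 estimate):

```python
price_per_image = 0.040   # USD, from the pricing page
total_charged = 60.0      # USD, approximate bill

# Units billed if every request is charged at the per-image rate
billed_units = round(total_charged / price_per_image)
print(billed_units)       # → 1500

images_received = 350     # rough midpoint of the 300-400 estimate
print(billed_units - images_received)  # → 1150, requests billed with no image delivered
```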

Apparently there were rate-limiting errors and content-filtering errors, despite my code sleeping for 61 seconds after every image request.

I’m actually in shock.

The pricing page says 0.040 USD / image.

Not per request.

Have any of you encountered something similar?

I have already opened a ticket.

Rate limit errors should have no impact on your bill. That is the API endpoint denying you, with nothing ever reaching the model.

The Python openai library has built-in retries, and underlying HTTP clients like httpx and requests have connection timeouts that assume internet resources respond quickly. I would increase all of those timeouts so the code doesn't give up while an image is still being generated.
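On the rate-limit side, a fixed 61-second sleep can still collide with limits; the usual alternative is exponential backoff with jitter. A minimal sketch — the request function and the `RuntimeError` are hypothetical stand-ins for one image request and the client's rate-limit exception, not the openai library's actual API:

```python
import random
import time

def generate_with_backoff(request_fn, max_attempts=5, base_delay=2.0):
    """Call request_fn, retrying with exponential backoff on failure.

    request_fn is a hypothetical zero-argument callable standing in
    for a single image request; it should raise on a rate-limit error.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except RuntimeError:  # stand-in for a rate-limit exception
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # wait 2s, 4s, 8s, ... plus jitter so clients don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulated endpoint: rate-limited twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "image-bytes"

result = generate_with_backoff(fake_request, base_delay=0.01)
print(result, calls["n"])  # → image-bytes 3
```

The point is that failed attempts are absorbed by the loop rather than counted as fresh top-level requests in your script's bookkeeping.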

The content filter, I'd think, should stop you at the prompt, not after generation. Billing you just so an intermediary AI can say "no" seems shady, if true.

Microsoft uses computer vision to block generations from being received, but I haven't seen evidence or an error message of this being done by OpenAI. Careful probing could reveal whether image generation is being completed, only to then be denied based on the output's contents.

I've encountered this behavior with ChatGPT, back when it still generated batches of four images. On quite innocuous prompts, some images would occasionally fail with a content-policy violation while others in the same batch loaded fine. That suggests a post-generation filter is in place.

I can imagine the API behaving similarly.

But this is pretty par for the course with OpenAI.

If your generation fails halfway through for whatever reason (even with chat), you still have to pay for it. And because you can't continue a generation, you have to restart from the last chat message, so you pay for your expensive generation tokens twice.

It’s wastage.