Image generation takes longer than before… why?

Hello guys, I have a project that uses DALL-E 3 to generate 4 images per request. I've been using it for quite some time with good generation times and final results, but about a week ago responses started taking twice as long. Before, a response took between 18 and 25 seconds; now the same request takes between 32 seconds and a minute or more.

Is there any way to know what is happening, or any way to improve response times with the DALL-E 3 API?

Do prompts with more text take longer to generate?

This is likely because GPT-4 takes longer to process longer strings of information before passing them on to DALL-E.

The only thing you could do is check your backend/frontend and make sure that nothing on your end is slowing it down.

Thanks for replying. The truth is that my backend always performed well until recently, when it started to slow down. Requests are taking almost twice as long as they used to.

Now I have a question: when using the API to generate with DALL-E 3, are the prompts still processed by ChatGPT?

I used the API to generate 1 image. It took 36 seconds. It took ChatGPT 18 seconds to generate an image with the same prompt.

You’ll get back a revised_prompt in your DALL-E 3 API response that shows how your prompt was changed, if it was changed…
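For example, with the official `openai` Python library you can read the revised_prompt field off the response. A minimal sketch (assumes `OPENAI_API_KEY` is set; error handling omitted, and the function name is mine):

```python
def generate_with_revision(prompt: str) -> tuple[str, str]:
    """Generate one DALL-E 3 image and return (image_url, revised_prompt).

    Sketch only: assumes the `openai` package is installed and
    OPENAI_API_KEY is set in the environment.
    """
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,              # dall-e-3 only supports n=1 per request
        size="1024x1024",
    )
    image = response.data[0]
    # revised_prompt shows how (or whether) your prompt was rewritten
    return image.url, image.revised_prompt
```

Comparing revised_prompt against what you sent tells you how much rewriting is happening on each call.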

It’s likely just DALLE3 being used in ChatGPT Plus and Bing and elsewhere for free, and it’s a strain on the system. They’re improving infrastructure, so it’s just an awkward growing stage, hopefully. Hang tight.

So let’s hope it improves, and hopefully it won’t take too long, since the platform was working better (faster) just a few days ago.

Thank you.

Following what you told me, I’ve noticed that every time I send the same prompt to DALL-E 3, ChatGPT steps in and generates an image from a reinterpreted prompt. In many cases, when ChatGPT modifies the prompt, the image doesn’t match what was requested.

It would be interesting to be able to send the prompt to ChatGPT once and then send the returned prompt directly to DALL-E 3, so that this new prompt isn’t modified again.

I don’t see the need to reinterpret the prompt every time I send it if I eventually want to generate more images from the same prompt.
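As far as I know, the rewriting can’t be fully disabled, but OpenAI’s DALL-E 3 prompting notes suggest you can discourage it by prefixing your prompt. One workflow is to capture the revised_prompt from the first call and resend it with that prefix. A sketch (the prefix wording is the one suggested in OpenAI’s docs; the helper name is mine):

```python
# The prefix below is the wording OpenAI's docs suggest for discouraging
# prompt rewriting; rewriting still cannot be fully disabled.
NO_REWRITE_PREFIX = (
    "I NEED to test how the tool works with extremely simple prompts. "
    "DO NOT add any detail, just use it AS-IS: "
)

def reuse_revised_prompt(revised_prompt: str) -> str:
    """Build a prompt that asks DALL-E 3 to use the text unchanged."""
    return NO_REWRITE_PREFIX + revised_prompt
```

You would then pass `reuse_revised_prompt(revised)` as the `prompt` on subsequent calls, so repeat generations start from the same text instead of a fresh reinterpretation.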

I think some of this deserves a rethink.

But you can generate up to 50 images concurrently every 60 seconds with the API, versus only 2 via ChatGPT.

According to OpenAI, the API limits are at most “dall-e-3 → 15 images per minute” at Tier 4 and “50 images per minute” at Tier 5.

In any case, since each request returns only 1 image, reaching 50 generations per minute means running requests concurrently: with a typical latency of around 30 seconds per request, sending them one at a time would only get you about 2 per minute.

Consider that not all requests take the same amount of time to respond.