Image generation / edit API times out with gpt-image-1

The API is preposterously slow: 44 seconds for a minimal request with two small JPEGs.

from openai import OpenAI

client = OpenAI()

prompt = "..."  # the edit prompt
# input images, opened in binary mode (filenames illustrative)
open_files = [open("input1.jpg", "rb"), open("input2.jpg", "rb")]

stream = client.images.edit(
    model="gpt-image-1",
    prompt=prompt,
    image=open_files,          # pass the list directly
    size="1024x1024",
    quality="low",
    stream=True,               # enable streaming
    # partial_images=2,
    # input_fidelity="high",
)

Crank it up to 1536x1024, high quality, 3 partials, and high input_fidelity, and it takes 130 seconds. So using your AI script (which I couldn't help but tweak line by line), my org can get it a hair faster, with the only difference being the input images.

(Picture: input photos don't do much for 'Disney style'.)

No partial events were captured, even though I added an else branch to catch any unhandled event types. This parameter was stealthed into the docs only, so I suspect it's another non-working product, like VAD chunking in transcriptions. I hadn't tried stream before, since the only thing it offers is paying for what could be a free progress GIF.

There's a non-SDK script for you in another topic; since you have a Python environment, you can crank the 240-second timeout I provide even higher (time.time() returns epoch seconds in Python, by the way):
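The timeout check in that script boils down to the pattern below (the 240-second figure is the one I ship; the helper name is mine, not from any library):

```python
import time

TIMEOUT_S = 240  # crank this higher if your requests run long

def deadline_exceeded(start_epoch: float, limit: float = TIMEOUT_S) -> bool:
    """True once more than `limit` seconds have passed since `start_epoch`.

    time.time() returns the current epoch time in seconds as a float,
    so elapsed wall-clock time is just a subtraction.
    """
    return time.time() - start_epoch > limit
```

In a polling loop you'd record `start = time.time()` before the request and bail out once `deadline_exceeded(start)` turns true.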

Since that script isn't wrapped in a main(), if you run it in a notebook or REPL environment you can keep inspecting the global variables afterwards.

If that fails too (the network connection drops, but the code keeps running and reports errors), you'd contact your hosting service provider and ask for idle open network connections to be kept alive for 1000+ seconds before they're closed on you. Or run your own VM.