Also, have you checked max tokens? Make sure it is high enough. Markdown also uses fewer tokens, so it's better for longer PDFs.
We don’t set max tokens for any of the models - the default is the max limit. Didn’t know HTML stresses LLMs. I think OpenAI’s datacenters are already stressed, as MS indicated on their earnings call today.
The bottom line is, when File Inputs was released for GPT-4.1, we jumped on it because it was a good fit for us. We tested a bunch of documents and had no issues until now. It seems LLM prompt acceptance has become a bit flaky. Besides, we are not the only ones having problems with this.
We have taken this out of production until it becomes stable - maybe when Sam is allocated more NVIDIA Blackwell chips.
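For anyone comparing notes, here’s roughly the request shape we were sending - a sketch only. The `input_file`/`input_text` content parts follow OpenAI’s file-inputs docs, but the model name, filename, and prompt below are placeholders:

```python
import base64

def build_file_input_payload(pdf_bytes: bytes, filename: str, prompt: str) -> dict:
    """Build the `input` payload for a Responses API call with an inline PDF.

    Sketch only: part types ("input_file", "input_text") follow the
    documented file-inputs format; everything else is a placeholder.
    """
    # PDFs are sent inline as a base64 data URL in `file_data`.
    file_data = "data:application/pdf;base64," + base64.b64encode(pdf_bytes).decode("ascii")
    return {
        "model": "gpt-4.1",
        "input": [
            {
                "role": "user",
                "content": [
                    {"type": "input_file", "filename": filename, "file_data": file_data},
                    {"type": "input_text", "text": prompt},
                ],
            }
        ],
    }

# Placeholder bytes stand in for a real PDF file read from disk.
payload = build_file_input_payload(b"%PDF-1.4 ...", "report.pdf", "Summarize this PDF.")
```

This dict is what gets posted to the Responses API endpoint (e.g. via the official SDK’s `client.responses.create(**payload)`); nothing exotic on our end.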
@OpenAI_Support is there an ETA or update on this issue? It’s been radio silence for almost two weeks now.
Hi Jackie, I’ve been looking into this. It has to do with project storage size. This hopefully should be fixed by EOW. If you have any recent request IDs to share, that would be awesome!
Hey Gireesh, gotcha! Here’s a recent response_id that failed: resp_681e6cd5704c8191895b4c0c51b09e27037ea14e8cc1888a
One workaround is to convert the PDF into a Word doc and then use the doc instead.
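A rough sketch of that workaround, assuming the third-party pdf2docx package (`pip install pdf2docx`; its `Converter.convert` API) - paths here are placeholders:

```python
from pathlib import Path

def docx_name(pdf_path: str) -> str:
    """Map a .pdf filename to the matching .docx output name."""
    return str(Path(pdf_path).with_suffix(".docx"))

def pdf_to_docx(pdf_path: str) -> str:
    """Convert a PDF to a Word doc with pdf2docx (third-party, assumed installed)."""
    from pdf2docx import Converter  # lazy import: docx_name() works without it

    out = docx_name(pdf_path)
    cv = Converter(pdf_path)
    cv.convert(out)  # converts all pages by default
    cv.close()
    return out
```

Then upload the resulting .docx instead of the PDF. Conversion quality varies with layout complexity, so spot-check the output before relying on it.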
This is definitely not solved yet. Sometimes it mysteriously works, but in most cases it doesn’t. Tested it with both the Chat Completions API and the Responses API, on o4-mini, o3, and GPT-4o.
@OpenAI_Support @Gireesh_Mahajan still having this issue. Did it end up being patched over the weekend?