I developed a WordPress plugin that chunks and processes large amounts of text using the Assistants API and rebuilds the full text afterwards.
I sized the chunks so that each assistant answer gets as close as possible to the 4096-token limit.
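For context, the chunking itself is simple. Here's a rough sketch of the approach (in Python rather than the plugin's PHP, and using a crude ~4-characters-per-token estimate instead of a real tokenizer, so the numbers are approximate):

```python
def chunk_text(text, max_tokens=4096, chars_per_token=4):
    """Split text into chunks whose estimated token count stays under max_tokens.

    Splits on paragraph boundaries so each chunk stays coherent. The
    chars_per_token heuristic is a rough stand-in for a real tokenizer;
    a single paragraph longer than the budget still becomes its own chunk.
    """
    budget = max_tokens * chars_per_token  # approximate character budget per chunk
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) + 2 > budget:
            chunks.append(current)
            current = paragraph
        else:
            current = current + "\n\n" + paragraph if current else paragraph
    if current:
        chunks.append(current)
    return chunks
```

Because the split happens only on paragraph boundaries, joining the chunks back with `"\n\n"` reproduces the original text exactly, which is what makes the rebuild step straightforward.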
Everything was working perfectly until a few days ago, when I started to see the process automatically stop after 300 seconds (and restart automatically, burning through my OpenAI credits…)
So there are two options:
1. OpenAI introduced a 300-second limit between the first API call made by a function and the retrieval of the final answer (status: completed).
2. The 300-second limit is on my side, though I have no idea what it could be: my PHP max_execution_time was 360 (I bumped it to 600, no change), and OpenAI has massively slowed down GPT-4 Turbo (by at least 30%, from my tests).
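Either way, one thing that at least stops the credit burn is enforcing a wall-clock deadline in the polling loop and cancelling the run when it's exceeded, instead of letting a stalled run keep billing. A minimal sketch (the `fetch_status` and `cancel_run` callables are hypothetical stand-ins for the actual run-retrieve and run-cancel API calls):

```python
import time

def wait_for_run(fetch_status, cancel_run, deadline_s=240, poll_interval_s=2):
    """Poll fetch_status() until the run reaches a terminal state or
    deadline_s elapses.

    fetch_status and cancel_run stand in for the real API calls that
    retrieve and cancel an Assistants run. Cancelling on timeout keeps
    an abandoned run from quietly consuming credits.
    """
    start = time.monotonic()
    while True:
        status = fetch_status()
        if status in ("completed", "failed", "cancelled", "expired"):
            return status
        if time.monotonic() - start > deadline_s:
            cancel_run()  # stop the run instead of letting it keep billing
            return "timed_out"
        time.sleep(poll_interval_s)
```

The deadline is deliberately set below whatever server-side or PHP limit is killing the process, so the cancellation fires before the restart loop does.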
Did any of you notice something similar? It's quite annoying, as this means GPT-4 Turbo simply cannot process a large amount of text, even when chunked and fed to various assistants.
In addition, it also seems a bit worse at its job with text lately, sometimes being super lazy and replacing things with <------ the list of links goes here----->.
But that might just be me being pissed off at it and judging too harshly.