OpenAI truncating the response

Hi

I am using the GPT-4o model. I have a token limit of 900K per minute and a request limit of around 4K+ per minute.

I have a filtered dataset that I am feeding into the OpenAI API with max_tokens set to 15000, along with prompt instructions and the user's natural-language query. I am well within my rate limits, but the response is not complete; it feels like it gets cut off around 4K tokens.
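
For reference, here is a minimal sketch of roughly how I am calling it (assuming the v1 Python SDK; the message contents are placeholders for my actual prompt and dataset):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=15000,  # requested output cap; the model still stops earlier
    messages=[
        {"role": "system", "content": "...prompt instructions..."},
        {"role": "user", "content": "...filtered dataset + user query..."},
    ],
)

print(response.choices[0].message.content)
# finish_reason == "length" would indicate the output hit the token cap
print(response.choices[0].finish_reason)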

I cannot chunk the input and stitch the results back together, as that would be really slow, and I would also have to send the combined partial results back to the OpenAI LLM, since the final answer has to be an average or max value (it could be any aggregation). I don't know how to increase the response length from the LLM.
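
To be concrete, the chunked workflow I am trying to avoid would look roughly like this (a sketch; `chunk_dataset` and `filtered_dataset` are hypothetical placeholders for my own splitting logic and data):

```python
# One round-trip per chunk, plus a final call just to aggregate.
partial_results = []
for chunk in chunk_dataset(filtered_dataset):  # hypothetical helper
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "...prompt instructions..."},
            {"role": "user", "content": chunk},
        ],
    )
    partial_results.append(r.choices[0].message.content)

# Extra round-trip to average/max/combine the partial answers.
final = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Combine these partial results..."},
        {"role": "user", "content": "\n\n".join(partial_results)},
    ],
)
```

This doubles (or worse) my latency and token usage, which is why I would much rather get the full response in a single call.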