Custom GPT - Processing API Calls in batches

I am trying to create a Custom GPT in the ChatGPT Enterprise version for my company, in which I am trying to implement this feature:

The user requests an analysis of documents from the backend.
An API call is made to my backend Python program hosted on a private Azure instance.
The fetched data is used in the analysis. (I have already written custom instructions on how to perform the analysis.)

The issue is the context window of the Custom GPT. I want to write an instruction so that API calls are made to the backend with a slight delay, and the information retrieved from the documents is processed in batches. Here is how I am trying to do it via instructions:

  • Complete Document Analysis across multiple Vendors:
    • Retrieve vendor contracts in batches to stay within the token limit.
    • Process each batch, analyze it according to the Analysis Type, store intermediate results in memory, and then remove the processed data.
    • Introduce a delay of three seconds between each batch processing to manage token usage and ensure thorough analysis.
    • Once all batches are processed, compile and provide a comprehensive summary of the analysis, including all alignments, discrepancies, and recommendations in a single combined table.
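For reference, the backend-side batching loop I have in mind would look roughly like this (a minimal sketch with hypothetical names; `fetch_contract_batch` and `analyze_batch` stand in for the real retrieval and analysis steps, and the batch size is an assumed tuning knob):

```python
import time

BATCH_SIZE = 5           # contracts per batch, tuned to stay within the token limit
BATCH_DELAY_SECONDS = 3  # delay between batches, as in the instruction above

def fetch_contract_batch(contracts, offset, size):
    """Return one slice of the vendor contracts (stand-in for the real API call)."""
    return contracts[offset:offset + size]

def analyze_batch(batch):
    """Placeholder for the per-batch analysis defined in the custom instructions."""
    return [{"vendor": c["vendor"], "summary": c["text"][:50]} for c in batch]

def process_all(contracts):
    intermediate = []  # intermediate results kept in memory
    offset = 0
    while offset < len(contracts):
        batch = fetch_contract_batch(contracts, offset, BATCH_SIZE)
        intermediate.extend(analyze_batch(batch))
        del batch      # drop the processed data before the next batch
        offset += BATCH_SIZE
        if offset < len(contracts):
            time.sleep(BATCH_DELAY_SECONDS)
    return intermediate  # compiled into the final summary table
```

The idea is that the Custom GPT would only ever see one batch's worth of results at a time, with the final compile step working over the stored intermediate results rather than the raw documents.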

Has anyone done something like this? Please don't suggest the Assistants API, as it requires setting up frontend/backend logic and is not cheap…


  • How do I implement this delay in the API schema?
  • Does a Custom GPT clear its processed data based on instructions?

Would appreciate meaningful input!

I don’t think you will be able to do this ‘straight’ from your GPT.
You should consider creating a Python or React app that acts as a 'wrapper'/proxy for those calls. So instead of calling your 'vendor' API, you call your app. From there it is also much easier to do things like caching (same vendor request … return previously retrieved data) and to deal with rate limits.
In your backend you can also use the Assistants API to do some of the processing, which can then be served to the Custom GPTs.
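As a rough illustration of the wrapper idea (hypothetical names throughout; `call_vendor_api` stands in for the real backend call), the proxy layer can start as little more than a cached, rate-limited function sitting in front of the vendor API:

```python
import time

_cache = {}          # same vendor request -> previously retrieved data
_last_call = 0.0     # timestamp of the last real vendor call
MIN_INTERVAL = 1.0   # assumed minimum seconds between real vendor calls

def call_vendor_api(vendor_id):
    """Stand-in for the real (slow, rate-limited) vendor API call."""
    return {"vendor": vendor_id, "contracts": ["..."]}

def get_vendor_data(vendor_id):
    """Proxy entry point the Custom GPT action would call instead of the vendor API."""
    global _last_call
    if vendor_id in _cache:               # cache hit: no vendor call at all
        return _cache[vendor_id]
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:                          # crude client-side rate limiting
        time.sleep(wait)
    data = call_vendor_api(vendor_id)
    _last_call = time.time()
    _cache[vendor_id] = data
    return data
```

The Custom GPT action then points at this proxy, and the batching, delays, and caching live entirely in your own code instead of in the instructions.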