Chained Prompt to complete text larger than 4000 tokens?

We want to use the OpenAI (text completion) API to handle a large document (100 pages, for example) and answer multiple questions based on it. Because of the 4,000-token limit, we need to split the whole text into smaller chunks and send them to the OpenAI API. But we do not want to summarize each chunk, since that would likely lose details our later questions may need. The question is: can the OpenAI API handle a "chained prompt", meaning keep the chunks we send in a context and answer our questions based on everything we sent previously as a whole? To accomplish that, we would also need a 'session', so one document does not interfere with another.
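For reference, a common client-side workaround is to split the document into overlapping chunks before sending anything to the API. A minimal sketch of just the splitting step, assuming character-based sizes (the numbers are illustrative, not a recommendation):

```python
def split_into_chunks(text, max_chars=8000, overlap=500):
    """Split text into overlapping chunks so content spanning a
    chunk boundary is not lost; sizes here are illustrative."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so chunks overlap
    return chunks
```

Each chunk can then be sent in its own completion request; the overlap reduces the chance that an answer sits exactly on a split point.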



+1, we have a similar use case to resolve.

As far as I know, there's no way to "chain prompts" together (ETA: with the basic API…). What you send in the prompt is what is used to generate the completion.
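In other words, the API is stateless: a 'session' has to be simulated client-side by resending the accumulated context on every call. A minimal sketch, where `complete` is a hypothetical stub standing in for the actual OpenAI completion request:

```python
# `complete` is a hypothetical stub; in real use it would send the
# assembled prompt to the OpenAI completion endpoint.
def complete(prompt):
    return "[model answer]"

class Session:
    """Client-side 'session': the API keeps no state between calls,
    so the document plus all earlier Q&A is resent each time."""
    def __init__(self, document):
        self.history = [document]

    def ask(self, question):
        prompt = "\n".join(self.history) + "\nQ: " + question + "\nA:"
        answer = complete(prompt)
        # Record the exchange so the next question sees it as context.
        self.history.append("Q: " + question + "\nA: " + answer)
        return answer
```

The obvious limitation is that the history itself must fit within the token limit, which is exactly the problem the original post is running into.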

Hope this helps.

We have a way to do this (we are chaining between 20 and 100 prompts to refine and answer). However, it is not cheap, and the use case would need to justify the cost.

Which problem are you trying to resolve?

a) Are you trying to get a longer output?
b) Are you trying to refer to more of the document?
c) Something else?

The solutions to these issues are not the same.

Send me a private message if you want to chat offline
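The poster doesn't describe their chaining method, but one way such a 20-to-100-prompt chain is often structured is the "refine" pattern: ask the question of the first chunk, then feed the running answer plus each subsequent chunk back to the model. A hypothetical sketch (`complete` stands in for an OpenAI completion call; the prompt wording is just an example):

```python
def refine_answer(chunks, question, complete):
    """'Refine'-style chaining: carry a running answer through the
    chunks, asking the model to improve it with each new context.
    `complete` is a placeholder for an OpenAI completion call."""
    answer = ""
    for chunk in chunks:
        prompt = ("Context:\n" + chunk + "\n\n"
                  "Existing answer: " + (answer or "(none yet)") + "\n"
                  "Question: " + question + "\n"
                  "Refine the existing answer using the new context.")
        answer = complete(prompt)
    return answer
```

This explains the cost remark above: one question over a 100-chunk document means 100 sequential API calls.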


Longer input… in other words, a longer context.


We are facing a similar issue.

We need much more context, and it is impacting the size of the output.

We tried fine-tuning to give it context, but that did not work.

An amazing use case is analyzing large datasets: this AI is generally very smart, and it could really summarize and find patterns in large datasets if OpenAI lets us.


+1 exactly what @adamrg72 said

data → OpenAI → summary
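That "data → OpenAI → summary" pipeline is usually implemented as a map-reduce over chunks: summarize each chunk, then summarize the concatenated partial summaries. A minimal sketch, with `summarize` as a placeholder for an OpenAI completion call:

```python
def map_reduce_summary(chunks, summarize):
    """Map-reduce over a large document: summarize each chunk,
    then summarize the joined partial summaries. `summarize` is a
    placeholder for an OpenAI completion call."""
    partials = [summarize(c) for c in chunks]   # map step
    return summarize("\n".join(partials))       # reduce step
```

Note this is exactly the summarization approach the original poster wanted to avoid, since detail can be lost in the map step.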

@all It seems this is now addressed with GPT-3.5 and GPT-4, where we can provide more context? Any comments?
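Partly: the chat models accept larger context windows, but the API is still stateless, so the document still has to be sent with each request. A sketch of how the messages might be assembled, assuming the ChatCompletion interface of that era (the instruction wording and the commented-out call are examples, not the only way):

```python
def build_messages(document, question):
    """Assemble a Chat Completions messages list with the document
    supplied as context. The role structure matches the chat API;
    the instruction text is just an example."""
    return [
        {"role": "system",
         "content": "Answer questions using only the provided document."},
        {"role": "user", "content": "Document:\n" + document},
        {"role": "user", "content": question},
    ]

# A real call (check current model names and token limits) would be:
# import openai
# resp = openai.ChatCompletion.create(model="gpt-4",
#                                     messages=build_messages(doc, q))
# print(resp["choices"][0]["message"]["content"])
```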