GPT-4 API: Continue conversation

Hi folks,
In the GPT-4 Playground, it is possible to “continue” text generation simply by sending “continue” as an additional user prompt when generation stops.

But I could not figure out how to do the same with the API using Python:

  1. I initiate the generation with both system and user prompts.
  2. The model starts generating text.
  3. It stops at a certain point for long generations.
  4. I get the generation status from “finish_reason”.
  5. I try to send a “continue” prompt via an additional API call, using ONLY the user prompt “continue”.
  6. But the model simply does not continue generating.
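Concretely, my calls boil down to something like this (the prompts here are just placeholders for my real ones):

```python
# The first call carries the full context:
first_call_messages = [
    {"role": "system", "content": "You are a helpful writer."},
    {"role": "user", "content": "Write a long story."},
]

# The follow-up call described in step 5 sends ONLY "continue" -- none of
# the earlier turns are included, so the model has nothing to continue from:
followup_messages = [
    {"role": "user", "content": "continue"},
]
```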

I guess a new API call means starting a new chat, so the model loses all of its previous context.

I am sure there is an established way to preserve the previous conversation/context and continue from where it left off using the API and Python, but I simply could not figure it out.

Any insight would be highly appreciated.



I’m not sure how the Playground works, but I suspect it just sends the current text as the prompt.

So if you want to replicate it, just resend your previous prompt with the word “continue” appended :-).

The other option is to use the chat interface rather than the completion interface, and then just add another entry to the messages array with the word “continue”.

This all assumes the output didn’t get cut off because of GPT-4’s 8,000-token limit.

Thanks for the reply, dror…

The Playground does not work like the API; it somehow preserves the previous conversation. But I could not find any way to continue a conversation through the API, since any new API call starts a fresh chat without the previous context.

I could not understand what you meant by “So if you want to replicate it, just resend your previous prompt with the word continue”.

Sorry, I wasn’t clear.

Using the completion API:
request: “Tell me about Paris.”
response: “Paris is … Eiffel tower.”
request 2: “Tell me about Paris. Paris is … Eiffel tower. Continue.” (or “Tell me more.”)
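In Python that completion-style resend is just string concatenation. A sketch (the prompts are placeholders, and the actual API call is left commented out since it needs a key and a model name):

```python
def build_continue_prompt(original_prompt: str, partial_response: str) -> str:
    """Glue the original prompt, the partial output, and 'Continue.' together."""
    return f"{original_prompt} {partial_response} Continue."

prompt = build_continue_prompt("Tell me about Paris.",
                               "Paris is ... Eiffel tower.")
# The completion call itself would then look something like (not run here):
# response = openai.Completion.create(model="text-davinci-003", prompt=prompt)
```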

The cleaner approach is to use the Chat API. In that case, you just append

{"role": "user", "content": "Continue!"}

to your previous request and GPT’s response, so it’d look like this:

{"role": "user", "content": "Tell me about Paris."}
{"role": "assistant", "content": " Paris is ... Eiffel tower."}
{"role": "user", "content": "Continue!"}
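In Python, that message list is passed straight to the chat endpoint. A sketch using the openai package’s ChatCompletion interface (the network call is commented out because it needs an API key):

```python
# The running conversation, including the model's partial reply so far:
messages = [
    {"role": "user", "content": "Tell me about Paris."},
    {"role": "assistant", "content": "Paris is ... Eiffel tower."},
    {"role": "user", "content": "Continue!"},
]

# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# continuation = response["choices"][0]["message"]["content"]
```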

Does that mean (when using the API) that I should feed ALL the previous requests and responses as part of each new request (prompt) to keep the context?

As long as you want to keep the same thread, yes.
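A minimal sketch of that pattern, with the model call stubbed out so the history bookkeeping stands alone (in real use the stub would be an openai chat-completion call):

```python
history = []  # grows with every turn and is resent in full on each call

def ask(user_text, model_call):
    """Append the user turn, get a reply, record it, and return it.

    `model_call` stands in for the real API call, e.g. something like
    openai.ChatCompletion.create(model="gpt-4", messages=history).
    """
    history.append({"role": "user", "content": user_text})
    reply = model_call(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Stubbed demo: the "model" just reports how many turns it can see,
# showing that each call receives the whole thread.
reply = ask("Tell me about Paris.",
            lambda msgs: f"I see {len(msgs)} message(s).")
```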
If you want to experiment with this, I have a project where, by default, requests don’t keep context, but you can use “chat” or “-c” to keep it. The docs explain it in more detail.
If you’re a TypeScript person, you can even steal my code 🙂

@dror, that’s appreciated. I will study your project, thank you 🙌

You can check out my repo:

Looks like that UI supports “continue” 😄, and I just tested it in Simplified Chinese. And yes, ChatCompletion() using the three roles also supports using “continue” to keep extending the content.

Why does the API not provide a parameter for, e.g., a conversation ID, as they have on their website? Passing the conversation history every time is not practical; the requests might get too long. I know they have mentioned that the model does not have memory, but they could manage the conversation in the API as they do on their website.
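Until something like that exists, a conversation ID can be emulated client-side by keying stored histories with your own IDs (a sketch; all names here are made up):

```python
import uuid

# conversation_id -> message list; this is the state OpenAI doesn't keep for us
conversations: dict = {}

def new_conversation() -> str:
    """Create an empty history and return its ID."""
    cid = str(uuid.uuid4())
    conversations[cid] = []
    return cid

def add_turn(cid: str, role: str, content: str) -> list:
    """Record one turn; the returned full list is what each API call would send."""
    conversations[cid].append({"role": role, "content": content})
    return conversations[cid]

cid = new_conversation()
add_turn(cid, "user", "Tell me about Paris.")
add_turn(cid, "assistant", "Paris is ... Eiffel tower.")
msgs = add_turn(cid, "user", "Continue!")
```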


Hi guys, any update on this? It would be awesome if we could keep a conversation ID and maintain the conversation flow with ChatGPT’s API.