Hi folks,
In the GPT-4 Playground, it is possible to “continue” text generation by simply providing “continue” as an additional user prompt when generation stops.
But I could not figure out how to do the same with the API using Python:
I initiate the generation with both system and user prompts.
The system starts generating text.
It stops at a certain point for long generations.
I check the generation status via “finish_reason”.
I try to feed a “continue” prompt with an additional API call, using ONLY the user prompt “continue”.
But the system simply does not continue generating.
I guess a new API call means starting a new chat, so the system loses all of its previous context.
I am sure there is a standard way to preserve the previous conversation/context and continue from where it left off using the API and Python, but I simply could not figure it out.
The Playground does not work like the API; it preserves previous conversation somehow. But I could not find any way to continue the conversation with the API without a new API call, and any new API call starts a new chat without any previous context.
I could not understand what you meant by “So if you want to replicate it, just resend your previous prompt with the word continue”.
Using the Completion API:
request: Tell me about Paris.
response: Paris is … Eiffel tower.
request 2: “Tell me about Paris. Paris is … Eiffel tower. Continue.” or “Tell me more.”
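A rough sketch of that pattern in Python, assuming the pre-1.0 `openai` library; the model name, `max_tokens`, and API key handling are placeholders, not part of the original post:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Tell me about Paris."
response = openai.Completion.create(
    model="text-davinci-003",  # example model
    prompt=prompt,
    max_tokens=256,
)
text = response["choices"][0]["text"]

# If the output was cut off, resend the prompt plus the partial
# answer so far, followed by "Continue."
if response["choices"][0]["finish_reason"] == "length":
    follow_up = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt + text + "\nContinue.",
        max_tokens=256,
    )
    text += follow_up["choices"][0]["text"]

print(text)
```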
The cleaner approach is to use the Chat API.
In that case, you just append
{"role": "user", "content": "Continue!"}
to your previous request and GPT’s response.
So it’d look like this:
{"role": "user", "content": "Tell me about Paris."}
{"role": "assistant", "content": " Paris is ... Eiffel tower."}
{"role": "user", "content": "Continue!"}
As long as you want to keep the same thread, yes.
If you want to experiment with this, I have a project
where by default requests don’t keep context, but you can use “chat” or “-c” to keep it. The docs explain it in more detail.
If you’re a TypeScript person, you can even steal my code.
Looks like that UI supports continue; I just tested it in Simplified Chinese. And yes, ChatCompletion() using the three roles also supports using ‘continue’ to keep extending the content.
Why does the ChatGPT API not provide a parameter for, e.g., a conversation id, as they have on their website? Passing the conversation history every time is not ideal, and the requests might get too long. I know they have mentioned that the model does not have memory, but they could manage the conversation in the API as they do on their website.
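Until something like that exists, the usual workaround is to keep the history yourself and resend a (possibly trimmed) copy of it with every request. A rough client-side sketch, assuming the pre-1.0 `openai` Python library and `gpt-4`; the trimming strategy here is deliberately naive:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Client-side conversation state; the API itself is stateless.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text, max_messages=20):
    history.append({"role": "user", "content": user_text})
    # Naive trimming: keep the system prompt plus the most recent messages
    # so the request does not grow without bound.
    trimmed = [history[0]] + history[1:][-max_messages:]
    response = openai.ChatCompletion.create(model="gpt-4", messages=trimmed)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Tell me about Paris."))
print(ask("Continue!"))
```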