How to implement "continue generation" through the API

I found a very interesting feature on the official ChatGPT website. When I generate a relatively long article, there is a "Continue generating" option that helps me keep producing long content. I would like to know how to achieve this through the open API interface.

The version I have implemented for my API-driven chat renderer only works for code generation. It checks whether the ``` triple-backtick markdown sequence has appeared in the returned text (I do this via a streaming system, but it would be equally valid in all-at-once mode). If, at the end of the message, the count of ``` fences is an odd number, the response is likely partway through a code segment, so an automatic "Truncated, please continue" user message is appended to the message list and the API is called again. If the count is even, it can be assumed that anything important has finished. Clearly this system would not work for natural-language-only content; perhaps the finish_reason is not "stop" on replies that end unfinished, I've not checked.
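For anyone who wants to try the same trick, here is a minimal non-streaming sketch of that heuristic, assuming the v1 Python `openai` client and `chat.completions`; the model name, the round limit, and the exact "Truncated, please continue" wording are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FENCE = "`" * 3  # the triple-backtick markdown code-fence marker

def generate_with_continue(messages, model="gpt-3.5-turbo", max_rounds=5):
    """Re-prompt while the accumulated reply ends inside an unclosed code fence."""
    full_text = ""
    for _ in range(max_rounds):
        response = client.chat.completions.create(model=model, messages=messages)
        chunk = response.choices[0].message.content or ""
        full_text += chunk

        # An even number of fences means all code blocks are closed; stop here.
        if full_text.count(FENCE) % 2 == 0:
            break

        # Odd count: we are likely mid-code-block, so append the partial reply
        # and a "please continue" user turn, then call the API again.
        messages = messages + [
            {"role": "assistant", "content": chunk},
            {"role": "user", "content": "Truncated, please continue"},
        ]
    return full_text
```

In streaming mode you would accumulate the deltas and run the same fence count once the stream ends, then decide whether to append the continuation prompt.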


First of all, thank you very much for your prompt reply.
There may be some differences between the situation I described and your reply. The following is an example screenshot from ChatGPT. I would like to know how to achieve a similar function through the OpenAI API or other methods.

Yes, I understand. I am suggesting that you look through the API documentation on the OpenAI Platform and take a look at the "finish_reason" flag that gets set when data is read in from the endpoint.

As you can see, there is a finish reason of "length". If you find that finish reason, then you know there was more output to come, so you can send a "please continue" type prompt to get the rest.
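A hedged sketch of that approach, again assuming the v1 Python `openai` client; the "Please continue." prompt text and the round limit are arbitrary choices:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_until_stop(messages, model="gpt-3.5-turbo", max_rounds=5):
    """Re-prompt while the API reports finish_reason == "length" (output cut off)."""
    parts = []
    for _ in range(max_rounds):
        response = client.chat.completions.create(model=model, messages=messages)
        choice = response.choices[0]
        parts.append(choice.message.content or "")

        if choice.finish_reason != "length":
            break  # "stop" or another reason: the model finished on its own

        # The reply hit the token limit: feed it back and ask the model to continue.
        messages = messages + [
            {"role": "assistant", "content": choice.message.content},
            {"role": "user", "content": "Please continue."},
        ]
    return "".join(parts)
```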


Yes, that’s the answer I was looking for, thanks a lot!
I will experiment with the idea above.

Finish reason is "stop" even if the response is truncated, so using finish_reason will not work, unfortunately.
