Since 2022 there has been an issue with ChatGPT stopping in the middle of its response. You get something like:
"But even with the game being officially shut down, Club "
and then nothing. Historically, the workaround has been to ask the model to “continue”.
But what can you do when you’re using the API and relying on a full response? You’re not there to monitor the output, and you need to trust that your program handles it properly.
Has anyone found any workarounds for this?
This is part of API error checking. You can check for finish_reason values being what you expect, and you can stream your responses so you have a fast-reacting system that can detect issues within seconds and request a retry if there is a problem.
You can also ask the model to emit a specific sequence at the end of its response and check for that. Lots of options.
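As a rough sketch of the first and last ideas combined, using the v1-style openai Python client (the END_MARKER string, the model name, and the retry count are just placeholder assumptions, not anything the API requires):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sentinel we instruct the model to append; any unlikely string works.
END_MARKER = "<<END>>"

def get_complete_response(prompt: str, max_retries: int = 3) -> str:
    """Request a chat completion, retrying if it looks truncated."""
    for _ in range(max_retries):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: swap in whatever model you use
            messages=[
                {"role": "system",
                 "content": f"End every reply with the exact marker {END_MARKER}"},
                {"role": "user", "content": prompt},
            ],
        )
        choice = response.choices[0]
        text = (choice.message.content or "").rstrip()
        # finish_reason == "stop" means the model stopped on its own;
        # "length" means the response was cut off by the token limit.
        if choice.finish_reason == "stop" and text.endswith(END_MARKER):
            return text.removesuffix(END_MARKER).rstrip()
    raise RuntimeError(f"No complete response after {max_retries} attempts")
```

The two checks are complementary: finish_reason catches token-limit truncation, while the sentinel catches a response that ended with “stop” but still didn’t finish the way you asked.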
Thank you for the swift response. I am glad to know I have options now.
Let’s break down your response:
- Check for finish_reason values being what you expect
- Do you know where I can go to learn more about this? I’ve never heard of it.
- Stream your responses so you have a fast-reacting system that can detect issues in seconds
- I’ve also never heard of this; would you mind elaborating? What does it mean to “stream your responses”?
I feel like so much of this forum is just us prompting each other for good information.
I’ve been making the model output a little summary at the end of its response, so that’s how I know it’s finished, but these methods seem waaay better.
Thanks @Foxalabs, those are great links. Gonna go learn me a book.
Thank you so much. As @matt0sai put it, I have some learning to do.
I finished reading both sets of documentation, and was wondering:
Do you think checking the finish_reason will provide full coverage, or does it not catch every instance of an incomplete response?
If possible, it would be simplest to check only the finish_reason and not worry about streaming.
It’s certainly a great way to ensure you have received a message that finishes the way you expect, i.e., with “stop” and not something else, so that is a big step forward.
I will add that you should also check whether the response returns within an acceptable time period. The API now includes a timeout feature, so making use of that together with the finish_reason check will give you a well-rounded solution. That said, streaming still improves detection latency and can improve the overall UX.
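Here’s a rough sketch of streaming plus a per-request timeout, again with the model name and the 30-second value as placeholder assumptions. If the request exceeds the timeout, the client raises a timeout exception you can catch and retry:

```python
from openai import OpenAI

client = OpenAI()

def stream_with_checks(prompt: str) -> str:
    """Stream a chat completion, failing fast on timeouts or truncation."""
    # with_options sets a per-request timeout in seconds; 30 is an arbitrary pick.
    stream = client.with_options(timeout=30.0).chat.completions.create(
        model="gpt-4o",  # assumption: swap in whatever model you use
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    parts: list[str] = []
    finish_reason = None
    for chunk in stream:
        if not chunk.choices:  # defensive: some chunks can arrive without choices
            continue
        choice = chunk.choices[0]
        if choice.delta.content:
            parts.append(choice.delta.content)
        if choice.finish_reason is not None:
            finish_reason = choice.finish_reason  # arrives on the final chunk
    if finish_reason != "stop":
        # e.g. "length" means the response was truncated; retry or surface an error
        raise RuntimeError(f"Incomplete response: finish_reason={finish_reason!r}")
    return "".join(parts)
```

Because you’re consuming the response chunk by chunk, a stall or disconnect shows up within seconds instead of after the whole generation would have finished, which is where the UX win comes from.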