Empty text in the response from the API after a few calls

Hi,
Sometimes I am getting an empty response from the completion endpoint. The generated text is basically an empty string like:

choices: [ { text: '', index: 0, logprobs: null, finish_reason: 'stop' } ]

This happened with both Curie and Davinci. Is anyone facing the same issue?

What’s your prompt look like? Is it short? Changing that up might make it more stable … I’ve noticed that sometimes even in Playground it won’t know what to say. Usually changing temp or another setting fixes it, but you might check your prompt too. Hope that helps.

Thank you @PaulBellow
The prompt is about 960 tokens. It works fine from the Playground, but when I call the API directly from my code, it starts returning empty strings after 4 to 5 calls.

Does it do the same in the Playground after 4 or 5 tries? Could it be a filter setting in your production app?

Interesting. I have tried the same prompt in the Playground 10 times and didn’t have any issues.
Then I made the same call from Postman 10 times on both engines, and all of them returned a proper response.

The problem only occurs when I make the call from my app! I will keep debugging to identify the issue.

Thanks a lot for your help @PaulBellow!

No problem. Are you using a stop-token in your app’s call to the API? Is the prompt in the app maybe adding a space to the end? A space at the end of the prompt can throw it off sometimes. Good luck!

I found the issue: it was a \n at the end of the prompt text! You’re a legend @PaulBellow :beers:
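
For anyone else hitting this, here is a minimal sketch of the kind of guard that would have caught it, written in plain Node.js (18+, with the built-in fetch) against the completions endpoint. The model name and the cleanPrompt helper are just illustrative, not the exact code from my app:

// Hypothetical helper: strip trailing whitespace/newlines so the model
// doesn't treat the prompt as already finished.
const cleanPrompt = (raw) => raw.replace(/\s+$/, "");

async function complete(rawPrompt) {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",      // placeholder model; use whichever engine you call
      prompt: cleanPrompt(rawPrompt), // the stray trailing "\n" gets removed here
      max_tokens: 256,
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].text;
}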

Haha. Great! That reminds me of a time at the newspaper back in the 90s, trying to get info from a legacy system online. I spent about a day and a half before finally finding an INVISIBLE character at the end of the file… oh, the memories. Glad you found it! There’s nothing like that feeling…

I’ve seen this behavior when getting it to generate dialogs. My impression was that I had constructed the prompt in such a way that GPT-3 found no viable answer. In the situation where I was getting blank dialog but with labels for each speaker, stray characters at the end of the prompt were not the problem.

I have the same problem here: the prompt works well in the Playground, but when I copy the exact same request from the Playground to Postman, it returns empty text.

On playground:

On Postman:
{
  "model": "text-davinci-insert-002",
  "prompt": "The sizes of LED screens are customizable depending on your installation location and view distance from the target audience, such as a retail shop, ",
  "suffix": " a shopping mall hall, an airport departure/arrival hall, and so on…",
  "temperature": 0.7,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}

response in Postman:

{
  "id": "cmpl-5fU3jlM2wHN3do4tEnBeTEBq1FxmS",
  "object": "text_completion",
  "created": 1660568679,
  "model": "text-davinci-insert:002",
  "choices": [
    {
      "text": "",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 44,
    "total_tokens": 44
  }
}

Carefully check your prompt (and completions if fine-tuning). Something as simple as a blank space in the wrong place can cause this kind of result.

I was getting the same thing, but then I realized the documentation says not to specify both temperature and top_p at the same time.

I removed top_p and everything was right again. Not sure why the code generation includes both when it’s not recommended.
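
In case it helps anyone else, here is a minimal sketch of what the request parameters can look like with only one sampling knob set. The field names are the standard completions parameters; the model name and prompt are placeholders:

// Keep only one sampling parameter: either temperature OR top_p, not both.
const params = {
  model: "text-davinci-003", // placeholder model name
  prompt: "Write a short product description for a custom LED screen:",
  max_tokens: 64,
  temperature: 0.7,          // top_p deliberately left out
};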

In my Node.js code, I removed 'stop': '\n' from the request data and it worked just fine. Now I don't see empty text in the response. :+1:

I am facing the same issue. In my code, stop is "", so that's not the issue. Secondly, with the exact same prompt it works sometimes, but most of the time (out of 5 tries) it does not.

Reminder: the completion models are just that, completion models.

If you provide an input that looks finished, with nothing else to write, the model will just return an "end of output" token and be done. That is the case even if it is your question that seems to be finished.

To avoid this, prompt the AI with some text that it should continue writing, as in the example below:

User: Hi!
AI: Hello, how can I help.
User: I have a question.
AI: (the AI will continue writing here because you added “AI:” yourself and showed it what to do)
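
A minimal sketch of that prompt construction in Node.js. The speaker labels and stop sequence come from the dialog above; the model name is a placeholder:

// Build a prompt that ends with the label the model should continue.
const history = [
  "User: Hi!",
  "AI: Hello, how can I help.",
  "User: I have a question.",
];
const prompt = history.join("\n") + "\nAI:"; // ends mid-turn, so there is something left to complete

const body = {
  model: "text-davinci-003", // placeholder model name
  prompt,
  max_tokens: 150,
  stop: ["\nUser:"], // stop before the model writes the next user turn itself
};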