Prompt through GPT3.5 Turbo returns "undefined"

We’re developing a chat application using Dialogflow ES that sends prompts to a GPT-3.5 Turbo model so it responds instead of Dialogflow. In the prompt we’re having difficulty with, we ask for a list of beliefs someone may have based on an upset they are experiencing. It is giving inconsistent results.

90% of the time it returns the type of response we are looking for: a list of five beliefs, each usually a sentence long, formatted with HTML bullet points.

The other 10% of the time it returns “undefined”. I don’t have access to the code; I’m writing the prompts and a developer is working on things on his end. He believes it is content filtering, but from the docs I’ve read, my interpretation is that the API would still return a valid JSON object with at least some text to display.

It also seems to work 100% of the time if I remove the HTML formatting, or if I ask for a list of emotions instead of beliefs, which doesn’t make sense given my current level of understanding.

Can anyone point me to what may be happening here? Specifically as to why I would be receiving “undefined”? Thanks.

Welcome to the developer forum!

I might be missing something, but “undefined” sounds like an error from some other part of the system, not an actual model response. Can you post the API calling code and some logs of the prompt given and the completion received?

Thanks for your response! I got the developer to look at the finish reason and it’s returning “content_filter”, so that seems to be where our issue is. We’re going to see if we can get the filter turned off through Azure, as it may cause more issues for us elsewhere, and put in a temporary fix to regenerate the response if the finish reason is “content_filter”. Thanks again!

{
id: "chatcmpl-7dnQoZpfUp41AwC7M3d6hKL8CxwOc",
created: 1689719162,
choices: [
{
message: {
role: "assistant",
content: undefined,
},
index: 0,
finishReason: "content_filter",
delta: undefined,
},
],
usage: {
completionTokens: 94,
promptTokens: 253,
totalTokens: 347,
},
}
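The temporary fix mentioned above (regenerate when the finish reason is content filtering) could be sketched roughly like this. Note that `createCompletion` here is just a stand-in for whatever client call the app actually makes; the `choices` / `finishReason` field names are taken from the logged response:

```javascript
// Rough sketch of a retry-on-content-filter fallback.
// `createCompletion` is a placeholder for the real API call; the
// `finishReason` and `choices` field names follow the logged response.
async function getCompletionWithRetry(createCompletion, prompt, maxRetries = 2) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await createCompletion(prompt);
    const choice = response.choices[0];
    // A filtered completion comes back with no content, which is what
    // ends up displayed as "undefined" downstream.
    if (choice.finishReason !== "content_filter" && choice.message.content !== undefined) {
      return choice.message.content;
    }
  }
  // Every attempt was filtered: show a fallback instead of "undefined".
  return "Sorry, I couldn't generate a response. Please try rephrasing.";
}
```

This also guards the UI against ever rendering a literal "undefined", regardless of why the content is missing.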


Hey, just a heads up about something that will help others more easily help you.

When formatting code, wrapping the code in “fences” (three backticks) will maintain the code formatting.

So, something like,

```
{
  id: "chatcmpl-7dnQoZpfUp41AwC7M3d6hKL8CxwOc",
  created: 1689719162,
  choices: [
    {
      message: {
        role: "assistant",
        content: undefined,
      },
      index: 0,
      finishReason: "content_filter",
      delta: undefined,
    },
  ],
  usage: {
    completionTokens: 94,
    promptTokens: 253,
    totalTokens: 347,
  },
}
```

Becomes,

```
{
  id: "chatcmpl-7dnQoZpfUp41AwC7M3d6hKL8CxwOc",
  created: 1689719162,
  choices: [
    {
      message: {
        role: "assistant",
        content: undefined,
      },
      index: 0,
      finishReason: "content_filter",
      delta: undefined,
    },
  ],
  usage: {
    completionTokens: 94,
    promptTokens: 253,
    totalTokens: 347,
  },
}
```