Models sometimes return gibberish

We had an issue with a customer where we often got gibberish responses from the model. We tried gpt-4, gpt-3.5-turbo, and even Azure OpenAI models. All the same…

I used Microsoft’s prompt flow to easily send batches of prompts (always the same prompt) to the model. Usually a run of 30. In those 30, there are always a few nonsense responses. We are seeing it in more than one app, all using different prompts.

Hope someone here can shed some light on this…


1 Like
  • \u00e9 corresponds to the character “é.”
  • \u00e1 represents the character “á.”

So you have " các champion", " América interrupt".

We get a better hint from the strange responses beginning with a space character. The AI might be deciding that instead of following your instructions, it should continue writing in the completion style of your input.


  • Enclosing any user data in a container with its own instructions:

    “input to be processed: [[[[ text of batched data ]]]]”

  • reduce the top_p parameter to 0.4 or below so very unlikely token production is eliminated.

  • user the correct prompting style for the model, of chat or completions.

You also could just have corrupted input data. Log the inputs and responses and see what produces these.

1 Like