Strange response from chat completion API

I’m making some simple API calls asking the AI to make a text longer.
I’m using the gpt-3.5-turbo-1106 model.
Let’s say this is the text I’m asking it to lengthen:

“This evening at 8 pm the “Musical guests for you” broadcast will be broadcast live. As every time, special guests will be present who will cheer the audience with their participation. This evening the guests will be Mickey and Donald Duck. During the show they will perform various gags.”

I tried various prompts until I settled on one that gave me a good result. So far, nothing strange.

But then I tried with another text:
“Don’t miss the live broadcast of the show “Musical guests for you” this evening at 8pm”.

The response I received left me perplexed: “Make sure to tune in to the live airing of “Musical Guests for You” tonight at 8pm. Mickey and Donald Duck will be there as guests.”

The forums are full of posts claiming that, to maintain context in a conversation via the API, the only way is to resubmit all previous messages every time (which I didn’t do). So where does the information about the guests on the broadcast come from, if I never entered it in the prompt?
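For reference, the resubmission pattern those posts describe looks roughly like this (a minimal sketch; the message texts are placeholders, and the actual API call is omitted):

```python
# Chat completion calls are stateless: the model only sees what is in
# the "messages" list of the current request. To keep context, the
# client must resend the full conversation history on every call.
messages = [
    {"role": "user", "content": "Make this text longer: <first text>"},
]

# ...submit the request, then append the assistant's reply to the history...
assistant_reply = "<reply returned by the first call>"
messages.append({"role": "assistant", "content": assistant_reply})

# The next request must carry ALL previous turns; if it doesn't,
# the model has no memory of anything said earlier.
messages.append({"role": "user", "content": "Make this text longer: <second text>"})

print(len(messages))  # the whole 3-message history goes into the next request
```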

I came up with two hypotheses:

  1. Posts claiming that the API doesn’t maintain context in a conversation are wrong or outdated, and the API now maintains context. But then I’d have to understand how to distinguish one conversation from another, since OpenAI always gives me different completion IDs.

  2. The API enriches the AI’s knowledge base with user prompts (which, however, strikes me as “dangerous” at the very least).

Any idea?
Thank you

I asked ChatGPT and the answer was:

However, in your case, it’s possible that the AI model made a coincidental association based on common patterns and knowledge in its training data. The phrase “Musical guests for you” could trigger associations with well-known characters like Mickey and Donald Duck, who are often associated with entertainment and performances.

While it’s surprising that the AI generated such a specific response without explicit context, it’s important to remember that AI models like GPT-3.5 are trained on vast amounts of text data, which can include a wide range of topics and associations. In this case, the model might have inferred the presence of these characters based on patterns in its training data, even though it seems improbable.

No way I believe that :slight_smile:

Both of your hypotheses are incorrect.

The API does not maintain context.

I totally agree but… so? How is that possible?

If it’s mentioning “Mickey and Donald Duck” in the output, you’re probably mistaken about how your code (or the library you’re using) works, and the actual text getting sent somehow DOES contain “Mickey and Donald Duck”. There’s pretty much no other possibility. Dump your request object as JSON right before you submit it, and you’ll find out.
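One minimal way to do that dump (assuming the request is built as a plain dict; the field names follow the chat completions request format, and the message text is a placeholder):

```python
import json

# Build the request payload exactly as it will be submitted.
payload = {
    "model": "gpt-3.5-turbo-1106",
    "messages": [
        {"role": "user", "content": "Make this text longer: <your text here>"},
    ],
}

# Print the payload right before sending it, so you can see exactly
# what text the API actually receives -- including anything a wrapper
# library or earlier code may have slipped into "messages".
print(json.dumps(payload, indent=2, ensure_ascii=False))
```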


I checked the logs and you are right: I was sending another text that contained those names. My fault. Sorry for the confusion, and thank you!