Another title went off talking about Disneyland and Hong Kong instead of titling the conversation.
Easiest to reproduce by asking for a foreign language while asking in English:
Or I just talk to it, and the delete confirmation shows the title it generated: This will delete “Of course! I’m here to help. What questions do you have?”
This might be why people think they’ve been hacked. Like our gpt-3.5-turbo that can’t follow a single instruction, their AI can’t follow a single instruction either?
I noticed this a day or two ago.
“I understand that you would like to create a title for the conversation in 1-4 words, in the same language as the conversation”. The title summary prompt must have changed in preparation for multilingual support in ChatGPT.
From what I can see, there are at least two meta-cognitive aspects of the summary prompt:
1.) “create a title for the conversation” - we already know the model does this quite well.
2.) “the same language as the conversation” - this new observation we’re asking the model to make seems challenging enough that it loses scope of the original task.
It’s not hard to imagine the jump in task complexity when the model is asked to summarize the content of the user request and identify the language of the request at the same time.
This problem is likely solved by breaking the summary prompt up into two, three or four separate requests, e.g. “Is this an English conversation?”, “Identify the language used to conduct this conversation”, “Provide a very concise summary of this conversation”, “Translate [summary] into [conversation language]”.
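A minimal sketch of that decomposition, assuming the title is produced by separate gpt-3.5-turbo calls through the chat completions API (pre-1.0 `openai` Python package); the intermediate prompts here are illustrative, not OpenAI’s actual ones:

```python
import openai  # pre-1.0 openai package, e.g. openai==0.28

def ask(instruction: str, conversation: str) -> str:
    """One small, single-purpose instruction per API call."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": conversation},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

def title_for(conversation: str) -> str:
    # Step 1: identify the language of the conversation.
    language = ask("Identify the language used to conduct this conversation. "
                   "Answer with the language name only.", conversation)
    # Step 2: summarize, without worrying about the language yet.
    summary = ask("Provide a very concise summary of this conversation, "
                  "in 1-4 words.", conversation)
    # Step 3: translate the summary into the conversation's language.
    return ask(f"Translate the following title into {language}. "
               "Reply with the translation only.", summary)

print(title_for("Usuario: ¿Cómo cocino arroz?\nAsistente: Lava el arroz..."))
```

Splitting the instruction this way trades a few extra API calls for each step being simple enough that the model rarely drifts off task.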
The title summarizer AI’s prompt is similar to one I’ve gotten it to expose before, and not by accident.
There are many deep Reddit threads where people are both concerned and entertained by this failure of the ChatGPT titles.
Bad AI or failing implementation? Try writing a similar task for gpt-3.5-turbo and you’ll likely discover that it, too, can’t follow instructions as well as it did just days ago, or keep its mind out of the data.
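For comparison, a hedged sketch of testing the combined, single-prompt version against gpt-3.5-turbo directly (the system prompt below is my paraphrase of the title instruction, not necessarily the one ChatGPT uses):

```python
import openai  # same pre-1.0 style client as above

TITLE_PROMPT = ("Create a title for the conversation, in 1-4 words, "
                "in the same language as the conversation.")

conversation = "User: How do I center a div?\nAssistant: You can use flexbox..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": TITLE_PROMPT},
        {"role": "user", "content": conversation},
    ],
)
# If the model answers the question instead of titling it, you have
# reproduced the same instruction-following failure seen in ChatGPT.
print(response["choices"][0]["message"]["content"])
```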