For some unknown reason, all my Custom GPTs stopped following their instructions. They behave like normal chat sessions. I already reported the error, but I am posting here in case someone else is experiencing the same issue or knows something about it.
Hi @rtm1, that’s definitely not expected behavior. Custom GPTs shouldn’t suddenly ignore their own instructions.
First thing I’d try is opening one of the GPTs in the editor and making sure the instructions are still saved properly. Even just re-saving and starting a brand-new chat can sometimes reset odd behavior.
If that doesn’t change anything, try duplicating one GPT and testing the duplicate in a fresh chat. If the duplicate behaves normally, it’s likely something session-related rather than a broader issue.
If they’re all behaving like plain chat sessions even after that, it’s best to reach out to support@openai.com so the team can take a closer look at your specific account configuration.
Yes, thanks for the response.
I just tried all of that. I even shut down my computer and restarted my browser sessions, and tried on a different device with a different Wi-Fi connection. It is very strange. I am currently in contact with support, but I am also raising the issue through different channels given how urgent this matter is for my company.
One thing you should not do: upload files and expect them to act as “instructions”.
Instructions should appear directly in the instructions field of the editor (GPTs -> My GPTs -> Edit), so verify you didn’t rely on files instead.
Why that technique is suspect if you used it across your GPTs: the model’s eagerness to call “file_search” even when the input doesn’t require a file search can change simply because of a shift in model behavior or in OpenAI’s internal prompt. Uploaded files are also treated as if they came from a user, with “avoid behavior injection” as one of the model’s goals, so they carry less authority than the instructions field.
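To separate the two cases, one way is to reproduce a GPT’s instructions as a system message in a plain API request: the builder’s Instructions field ends up as system-level context, while uploaded files are only surfaced through file_search. A minimal sketch below (the instructions text and model name are placeholders, not taken from this thread):

```python
# Sketch: rebuild a Custom GPT's setup as a plain Chat Completions payload,
# to test whether instruction-following breaks at the model level or only
# in the GPT builder. INSTRUCTIONS is a placeholder for your GPT's field.

INSTRUCTIONS = "You are a strict proofreader. Reply only with corrected text."

def build_request(user_input: str) -> dict:
    """Build a request payload with the GPT's instructions as a system
    message -- the same place the builder's Instructions field ends up,
    unlike uploaded files, which are retrieved via file_search."""
    return {
        "model": "gpt-4o",  # assumption: any current model works for this test
        "messages": [
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": user_input},
        ],
    }

payload = build_request("teh quick brown fox")
# The instructions travel as the system message, not as an attached file:
print(payload["messages"][0]["role"])  # system
```

If the model follows the same instructions here but the GPT doesn’t, the problem is in the GPT configuration rather than the model.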
Let’s test a GPT to replicate and further the investigation:
Then give it an input that is ambiguous:
The two girls and her little sister go back to mod and trod 1940’s zazou France, with period styles and fashion.
I think you can see from the result below that the GPT with “instructions” in its context field knows what it has been internally instructed about.
