I have the same problem. So custom GPTs are pretty useless for me.
I’ve also been having the same issue for the past few days: I’m unable to update GPT behaviours, and various error messages appear. However, some changes do seem to be applied despite the errors (for example, the conversation starters were updated to what I requested even though an error message was displayed). Looking forward to hearing about a solution to this issue.
Not at all. OpenAI has more than one method for reward modeling, applied every time a model is released for public use. ChatGPT is in a learning process based on social context: it uses user responses as weights for various actions, including the use of related words, similar words, and synonyms. Data is also collected here, and having more varied types of data helps with the development of the AI’s knowledge. For example, it learns from the results of the work it has done, and feedback such as compliments and scoldings (like my own) acts as a reward. OpenAI has said these can be used.
@chieffy99 , wow, thanks for such interesting insights!
Any update on this issue? I haven’t been able to update my public GPT for two weeks now. Same “Error Saving GPT” warning.
You can check whether the failure is caused by a content filter that runs when you attempt to save.
Copy all of the GPT’s text fields out to a text notepad. Make sure you have an organized backup of any files you may have attached, and, for each action, copy out all the text required to reproduce it.
Then replace all the text fields of the GPT with really basic content containing no brand names (“You are Bob the AI”, etc.) and attempt to save to Everyone. Then move on to disabling features, and to removing actions and domains.
Another way to go at this sideways is to create a brand new GPT, duplicating everything exactly.
This may let you determine whether you are being blocked because of the GPT’s content, because the GPT was reported or is under review, or because of backend corruption that makes the GPT unmanageable, in which case it needs to be discarded and recreated.
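To make the bisection step above concrete, here is a minimal sketch of the same idea in Python. It is illustrative only: there is no public API for saving a custom GPT, so `save_ok` is a hypothetical placeholder standing in for the manual “replace the fields and try to save” step, and the field names are made up for the example.

```python
def find_blocked_fields(fields, save_ok, placeholder="You are Bob the AI."):
    """Return the names of fields whose original text causes the save to fail.

    fields   -- dict mapping field names to the text you backed up
    save_ok  -- hypothetical predicate: True if a save with these field
                contents would succeed (in reality, you test this by hand)
    """
    blocked = []
    for name in fields:
        # Start from known-good placeholder text in every field...
        trial = {key: placeholder for key in fields}
        # ...then restore just one original field and test the save.
        trial[name] = fields[name]
        if not save_ok(trial):
            blocked.append(name)
    return blocked
```

Restoring one field at a time against an otherwise known-good baseline isolates exactly which piece of text trips the filter, instead of guessing at combinations.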
PS: earlier in this topic there is some fanciful speculation that a GPT is “tuned” by, or learns from, past interactions. No, its behaviour comes only from the instructions that you see.