I’ve been developing Custom GPTs for several months, and my experience has gone from impressive and exciting to disheartening and frustrating. This is really just a rant because I don’t expect anything to get better. It seems like OpenAI is consistently reducing the amount of compute and capability given to custom GPTs. The result is an intolerable level of inconsistency and poor performance. I have several GPTs that I’ve been on the edge of making public, only to backtrack after performing final testing and refinement. These GPTs used to be roughly 90% reliable and performant but are now 30-50%, even after simplifying rather than enhancing their system instructions. I will probably switch efforts to more heavily using the API, but the loss of time and money is depressing. I got locked into a year-long subscription for a ChatGPT Team account that seems like a total waste at this point.
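For anyone weighing the same move to the API, here’s a rough sketch of what that looks like in practice. This is just an illustration, not an official recipe: the system instructions and prompt are placeholders, and the pinned model snapshot is my own choice, the point being that the API lets you pin a dated model and supply the instructions yourself instead of relying on whatever the custom GPT runtime is doing behind the scenes.

```python
# Minimal sketch of replicating a custom GPT via the API (OpenAI Python SDK v1).
# The instructions, prompt, and chosen snapshot below are placeholders/assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTIONS = "You are my custom assistant. <paste your GPT's instructions here>"

response = client.chat.completions.create(
    model="gpt-4-0125-preview",   # pin a dated snapshot rather than a floating alias
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Hello"},
    ],
    temperature=0.2,              # lower temperature for more consistent output
)
print(response.choices[0].message.content)
```

You lose the built-in knowledge files and hosted tools that custom GPTs provide, but you gain control over which model version actually answers.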
I reported a similar issue and was told that they are aware of it and are working toward a solution.
I support your rant and share in your frustration. In the early months our GPTs were able to perform ANYTHING we asked. Since the release it has become watered down, and the trending ones are no-sh** ideas that barely work any better than those with 100-1000 uses. @SamAltman can you bring back the way it was around release? Nah? All good…
I am sure they will fix the problem. I have trust in the team at OpenAI. Let’s be patient.
Check out my profile’s stickied topic. After some troubleshooting I was able to fix it. Hope it helps!
Thanks for your response. Your troubleshooting guide looks like a good resource to use after periodic system issues. However, the issue I am experiencing is a persistent and universal (across all of my custom GPTs) reduction in reliability and performance. It is very clear to me that ChatGPT, and more specifically custom GPTs, have been receiving less compute since mid-to-late January. OpenAI’s focus is not on its currently released products or customers. They have limited compute available and are trying to juggle it while keeping customers minimally appeased. I’m not wasting time being gaslit any longer. The lack of visibility into the product and services we are paying for is not going to improve.
The change is caused by:
- Policy: after the launch of GPTs, OpenAI ran into pressure from many parts of society, such as a lawsuit from NY, which created a need to change the AI’s behavior.
- Real-time learning: GPT is now widely available, and learning from RLHF lets it quickly pick up various behaviors. But there are disadvantages to learning this way, such as no longer following specific usage instructions.
I emailed these issues to OpenAI, and they are currently working on a solution.
I don’t know how OpenAI is going to solve this problem because they don’t like to say anything publicly about anything… but we are definitely in a similar situation to yours.
This is true not only for GPTs but also for ChatGPT, and indirectly, to a smaller extent, for the API, including GitHub Copilot… It is frustrating for me because I don’t have any way to switch to another platform for now… I wish that one day the competitors will be better, or at least equivalent, so that we can just move to another platform and make OpenAI wake up and make the improvements necessary to get everyone happy, satisfied, and impressed… Well, the clickbait crowd is still impressed with this, but from what I understand it’s just smoke and mirrors…
This is your first rodeo, huh?
LOL, still, I just wish it could do one thing: allow more KB docs, or simplify the integration with other solutions that handle KB docs better. Patience ;)
I don’t disagree at all. Custom GPTs were very reliable when they were released; somewhere along the way they started ignoring their custom instructions, ignoring their sources, and just behaving like normal ChatGPT. I understand that frustration, but ChatGPT has always gotten nerfed month by month. Fortunately, competitors like Anthropic seem more stable (they might be worse or better than ChatGPT at any given time, yet their capabilities remain constant, i.e., it seems they don’t change models in the background). There’s also DeepSeek, Gemini 1114, Mistral 2.1, Qwen 2.5, and Llama.
Honestly, competition has been a blessing. I’m no longer cursing at OpenAI for nerfing GPT; I just use the competition and use OpenAI less that day. Probably good for them, since they want to use less compute.