Hey there and welcome!
Custom GPT news has gone pretty dark for a while now. This isn't exactly unexpected, though, I'll say that much.
The problem GPT builders faced is that people loved building them…for themselves. Not many people use other people's GPTs, but builders do tend to use their own.
OpenAI seems squarely focused on agentic stuff right now, along with the rest of the industry. When this era simmers down and things mature a bit, we may see some kind of evolution in the form of a custom agent system, or custom agentic GPTs (maybe even with built-in MCP support or something), but that's purely my speculation.
After a couple of years being active in this space, it's clear to me that custom AI stuff isn't going to be like the web. They tried plugins, and that failed. Then they tried custom GPTs, and excluding a couple of outliers, none of them ever really took off. Ultimately, much of what custom GPTs did could have been done on the base models, and for the stuff that couldn't, people just built with APIs and coding tools, either from real knowledge or by vibe coding.
People have very strong opinions about vibe coding at the moment, but regardless, I think what happened was this: instead of relying on a limited interface to build roughly what they wanted, people just used the same models to generate the code they actually wanted. Which makes sense if you think about it. Custom GPTs were meant to be an alternative to coding, but they came with a very limited feature set unless you already knew how to code, effectively defeating the purpose. Instead, people realized the better "alternative" to coding something yourself was to copypasta whatever the LLMs made and keep winging it until it did what you wanted. The barrier to programming that custom GPTs were trying to remove was instead removed by the greater intelligence of the models themselves.