Why doesn't OpenAI provide a curated support GPT for building GPTs?

Hi, is building a support GPT really insurmountable for OpenAI?

I appreciate that developing and maintaining a robust support GPT for developers is non-trivial, especially accounting for new GPT features and functionality. However, it would be invaluable for OpenAI’s business. If OpenAI can’t do it, then let’s stop pretending AI support chatbots are much more than glorified search and canned responses.

The existing documentation would be a starting point for a true support GPT, supplemented with manual additions by OpenAI staff and select (accurate) discussions from the forums. Yes, humans will be needed. And it won’t always be right, so just throw in some disclaimers.

I’m not an expert with GPTs, and a lot of the questions and issues I encounter could easily be answered by an informed source, rather than me constantly hunting for breadcrumbs in conversations on the OpenAI forums and elsewhere.


I think this is one of the core issues. Apart from disclaimers being ignored by most people, customer support is often finely tuned to technically never be wrong, and certainly never to be liable for anything.

If you’ve ever talked to humans in a support role, you know that they’re generally no better than last-generation chatbots with dialogue trees. This is because they have a script they have to follow.

This isn’t a technology issue. People hate ‘dumb’ chatbots because they have limited options and can’t actually solve critical issues. And human reps can’t do more than that either, because they often operate from the exact same script. The only time a customer will be satisfied is when they find someone willing to go off script to find a solution*; that person, however, will then typically be docked for going off script.

* There are exceptions to this, like the GoDaddy case, but that’s a whole other can of worms.

And LLMs will often go off script. Most enterprise efforts in 2023 were attempts at somehow getting GPT-3.5/4 to stay on script, which is pretty dumb IMO, but that’s where a lot of the money went and where the excitement died.


So yeah, I suspect OpenAI can’t, because, like most companies, it doesn’t want to. The company becomes aware of issues and then solves them wholesale, but there’s no provision to solve individual issues, whether by human or by machine.

Better to say and do nothing than to run the risk of occasionally being wrong.

I think if we look at how people react to mistakes, waiting for an opening to pick anyone or anything apart, it’s not the worst strategy, unfortunately :thinking: