I have created a custom GPT (my GPT) on GPT-4. How do I run it on GPT-3.5 Turbo?

Dear All

I have built a custom GPT on GPT-4 using my ChatGPT Plus account (via the "Create a GPT" tab).

Since GPT-4 is expensive, I want to run my custom GPT on GPT-3.5 Turbo.

How do I do it?

I also want to make it available to users through WhatsApp. How do I do that?

Thank you in advance


None of that is possible.

GPTs are a feature available only to ChatGPT Plus subscribers, within the ChatGPT website or app.


Each version of the model is a distinct architecture with its own parameters and capabilities. A GPT built against GPT-4 typically can't be run on GPT-3.5 Turbo directly.

If possible, OpenAI should add the option of a model choice, such as GPT-3.5 (0301, 0613, 1106) or GPT-4 (0314, 0613, 1106).

Within ChatGPT, OpenAI often runs trials of different preview models for selected users. You may have seen the many requests for feedback: the usual thumbs up/down, "how's this conversation?", or even a pop-up asking which of two responses is better.

Selecting your own model works against this.

API model selection is essentially not an option either. Earlier models are not trained on the same chat container format, can't call functions, lack the tuning GPTs need, and simply don't have the "safety" or "efficiency" OpenAI wants to impose.

If you want API features, like integrating AI in other products, you pay for the API.

I am willing to pay for the OpenAI API.

How do I make my custom GPT on GPT-4 available to ordinary users through WhatsApp?

A GPT can be shared, but it can only be used by ChatGPT Plus subscribers. Everyone else will just see a button saying an upgrade is required.

With the API, you can develop your own solution, but there is no "sharing" that lets someone interact with OpenAI directly.

With the API, you must develop the AI product on your own server. You also take on the accountability that comes with operating such a service: managing user accounts, sending user inputs to the moderation endpoint first, and paying for others' usage by the amount of language data (tokens) they consume.
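The "moderation first, then generation" flow described above can be sketched as a small pipeline. This is an illustration, not a complete service: the `moderate` and `generate` callables here are stand-ins that, in a real deployment, would call OpenAI's moderations and chat completions endpoints.

```python
# Sketch of a moderation-first pipeline for a self-hosted AI product.
# The callables are hypothetical stand-ins for real API clients.
from typing import Callable

def handle_user_input(text: str,
                      moderate: Callable[[str], bool],
                      generate: Callable[[str], str]) -> str:
    """Screen input with moderation before spending tokens on generation.

    moderate(text) returns True if the input is flagged;
    generate(text) produces the assistant reply.
    """
    if moderate(text):
        return "Sorry, I can't help with that."
    return generate(text)
```

Your WhatsApp webhook handler (or any other front end) would route each incoming message through a function like this before anything reaches the model.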

A similar question. I’ve been conversing with my Custom GPT using the voice option in the Android App. But I use it quite a bit and run out of my allotted time. I don’t need to share it with anyone, but I wonder if I can run it on GPT 3.5 instead?

You have a great question and an interesting case. For those with Plus, why not GPTs on 3.5?

About the only caveats that would prevent GPTs from using the 3.5 model are its smaller context length for holding instructions, data, and lengthy conversations, and its weaker cognitive abilities for correctly using functions such as browsing with Bing and for writing code without errors.
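Accommodating the smaller context length usually means trimming old conversation turns before each request. A minimal sketch, assuming a crude estimate of roughly four characters per token (a real app would use a proper tokenizer and also pin the system message):

```python
# Hypothetical history trimmer for a smaller-context model like 3.5.
# Keeps the most recent turns whose rough token estimate fits the budget.

def trim_history(messages: list[dict], budget_tokens: int = 3000) -> list[dict]:
    def estimate(m: dict) -> int:
        # ~4 characters per token, plus a little per-message overhead
        return len(m["content"]) // 4 + 4

    kept, total = [], 0
    for m in reversed(messages):          # newest turns first
        if total + estimate(m) > budget_tokens:
            break
        kept.append(m)
        total += estimate(m)
    return list(reversed(kept))           # restore chronological order
```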

Your own GPTs could accommodate those limitations when targeting 3.5, just as API developers do when making custom AI applications.

For now, GPTs are a feature enabled only on the GPT-4 ChatGPT model.


+1 for this.

I have a similar issue: I created a "quick dictionary" GPT, which is not quick enough when running on GPT-4. The ability to run it on 3.5 would be very helpful.

Note: when you create a custom GPT, no new model is created in the technical sense. The custom GPT is just extra instructions and data layered on top of the base model.
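That's also why an API workaround exists for those willing to pay per token: since the GPT is just instructions plus a base model, you can paste those instructions into the system message of a Chat Completions call against gpt-3.5-turbo. A minimal sketch (the instructions text here is a placeholder for your own GPT's configuration):

```python
# Sketch: approximating a custom GPT on gpt-3.5-turbo via the API by
# carrying the GPT's instructions as the system message.

def build_request(instructions: str, user_message: str,
                  model: str = "gpt-3.5-turbo") -> dict:
    """Build a Chat Completions request body for the given GPT instructions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": instructions},
            {"role": "user", "content": user_message},
        ],
    }

# Sending it requires the openai package and a paid API key, e.g.:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(**build_request(
#       "You are a quick dictionary.", "Define 'ephemeral'."))
#   print(resp.choices[0].message.content)
```

This won't reproduce GPT-only features like knowledge-file retrieval or browsing, but for instruction-driven GPTs it captures most of the behavior.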

I have the same issue. Speed is more important. Have you figured out how to create GPTs with 3.5 turbo?

OK, nobody is going to want to do this, but here is a solution I just created.

I developed a RAG system using the OpenAI Chat Completions API (FYI, I could also use the Assistants API to accomplish the same). It lets you chat with a knowledge base of documents I have created on a particular subject. I also designed it to receive (and respond to) questions via a REST API.
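For anyone unfamiliar with the pattern, the retrieval half of such a system can be sketched in a few lines. This toy version scores documents by word overlap with the question; the real system would use embeddings, but the shape is the same: retrieve the most relevant documents, then prepend them to the prompt.

```python
# Toy sketch of the retrieval step in a RAG pipeline. Word-overlap
# scoring stands in for a real embedding-based similarity search.

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(docs,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n---\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```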

So, I created a GPT that accesses this knowledge base API via an Action – and it works as expected.

However, as the knowledge base was using gpt-4-turbo-preview, its responses to questions were slow. Very slow. Often beyond the GPT Action timeout of 45 seconds. So I solved the problem by making the knowledge-base model configurable: I designed it so it can also use gpt-3.5-turbo-16k, mistral-medium, and claude-2.

Theoretically, you could solve your GPT speed problem with an external knowledge base running gpt-3.5-turbo-16k, reached via Actions. Of course, you would need to build the knowledge base yourself using the Assistants or Chat Completions API, then expose it through an API of your own, and that extra hop adds some latency.
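The model-switching part of this is simple to sketch. In this illustration, the `answer_fn` callable is a hypothetical stand-in for whatever provider client actually calls the chosen model; the service picks a faster backend so the Action stays under the timeout. Model names are the alternates mentioned above, and the preference keys are my own invention.

```python
# Sketch of a model-switching knowledge-base backend for GPT Actions.
# answer_fn(model, question) is a hypothetical stand-in for a real
# provider call (OpenAI, Mistral, Anthropic, ...).
from typing import Callable

BACKENDS = {
    "fast": "gpt-3.5-turbo-16k",   # keeps Actions under the 45 s timeout
    "alt": "mistral-medium",
    "alt2": "claude-2",
}

def answer(question: str, preference: str,
           answer_fn: Callable[[str, str], str]) -> str:
    """Route the question to the backend matching the speed preference."""
    model = BACKENDS.get(preference, BACKENDS["fast"])
    return answer_fn(model, question)
```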

But, it would solve the problem if you absolutely positively must use a GPT.

Again, it wasn’t a problem I was initially trying to solve, but a benefit I discovered as part of the solution I created.