Each version of the model is a distinct architecture with its own parameters and capabilities. You typically can't take something built for GPT-4 and run it on GPT-3.5 Turbo directly.
Within ChatGPT, OpenAI often runs trials of different preview models with selected users. You might have seen the many requests for feedback: the usual thumbs up, "how's this conversation?", or even a pop-up asking which of two responses is better.
Selecting your own model works against this.
API-style model selection is pretty much not an option in ChatGPT. Earlier models weren't trained on the same chat container format, can't call functions, lack the tuning GPTs need, and simply don't have the "safety" or "efficiency" OpenAI wants to impose.
If you want API features, like integrating AI in other products, you pay for the API.
A GPT can be shared, but it can only be used by ChatGPT Plus subscribers. Anyone without Plus will just see a button saying an upgrade is required.
With the API, you can develop your own solution, but there is no "sharing" where someone else interacts with OpenAI directly.
With the API, you must build an AI product on your own server, and you also take on the accountability that comes with it: managing user accounts, sending user inputs to the moderation model first, and everything else involved in operating a service where you pay for others' usage by the amount of language data.
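That moderation-first flow is simple to structure. Here's a minimal sketch of the gating logic; the `moderate` and `complete` callables stand in for real API calls (e.g. the OpenAI moderations and chat completions endpoints), and the refusal message is illustrative:

```python
def moderated_reply(user_input, moderate, complete):
    """Check user input against a moderation step before the main model sees it.

    `moderate` returns True if the input is flagged; `complete` produces
    the model's reply. Both are pluggable so the real API calls can be
    swapped in.
    """
    if moderate(user_input):
        return "Your message was flagged and cannot be processed."
    return complete(user_input)


# Stub callables for illustration only:
flag_everything = lambda text: True
echo_model = lambda text: f"model says: {text!r}"

blocked = moderated_reply("bad stuff", flag_everything, echo_model)
allowed = moderated_reply("hello", lambda t: False, echo_model)
```

In a real deployment, `moderate` would call the moderations endpoint and `complete` would call the chat model, but the shape of the flow is the same.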
A similar question. I’ve been conversing with my Custom GPT using the voice option in the Android App. But I use it quite a bit and run out of my allotted time. I don’t need to share it with anyone, but I wonder if I can run it on GPT 3.5 instead?
You have a great question and an interesting case. For those with Plus, why not GPTs on 3.5?
About the only caveats that would prevent GPTs from running on the 3.5 model are its shorter context length for holding instructions, data, and lengthy conversations, and its weaker cognitive abilities for correctly using functions such as browse with Bing and for writing code without errors.
Your own GPTs could accommodate those limitations when targeting 3.5, just as API developers do when making custom AI applications.
For now, GPTs are a feature enabled only by the GPT-4 ChatGPT model.
I have a similar issue: I created a "quick dictionary" which is not quick enough when running on GPT-4. The ability to run it on 3.5 would be very helpful.
Note: when you create a custom GPT, no new model is created in the technical sense. The custom GPT is just extra instructions and data layered on the base model.
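To make that concrete: under the hood, a custom GPT's "Instructions" text effectively becomes a system message prepended to every request. A sketch of the equivalent Chat Completions payload, with illustrative instructions and model name:

```python
# The text you'd paste into a GPT's "Instructions" field (example only).
GPT_INSTRUCTIONS = "You are Quick Dictionary. Reply with a one-line definition only."


def build_request(user_message, model="gpt-3.5-turbo"):
    """Assemble a Chat Completions payload equivalent to one custom-GPT turn:
    the GPT's instructions go in as a system message ahead of the user's text."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": GPT_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    }


payload = build_request("define: ephemeral")
```

This is why "running a GPT on 3.5" is conceptually trivial via the API even though ChatGPT doesn't expose it: only the `model` field changes.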
OK, nobody is going to want to do this, but here is a solution I just created.
I developed a RAG system using the OpenAI Chat Completions API. (FYI, I could also have used the Assistants API to accomplish the same.) It lets you chat with a knowledge base of documents I created on a particular subject. I also designed it to receive, and respond to, questions via a REST API.
So, I created a GPT that accesses this knowledge base API via an Action – and it works as expected.
However, because the knowledge base was using gpt-4.5-turbo-preview, its responses to questions were slow. Very slow. Often beyond the GPT Action timeout of 45 seconds. So I solved the problem by making the knowledge base model swappable: I designed it to also work with gpt-3.5-turbo-16k, mistral-medium, and claude-2.
Theoretically, you could solve your GPT speed problem with an external knowledge base running gpt-3.5-turbo-16k, called via Actions. Of course, you'd need to build the knowledge base yourself with the Assistants or Chat Completions API, then expose it via an API of your own. And calling out to an API will add some latency.
But, it would solve the problem if you absolutely positively must use a GPT.
Again, it wasn’t a problem I was initially trying to solve, but a benefit I discovered as part of the solution I created.
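For anyone curious what "knowledge base behind a REST API" means in practice, here is a toy sketch using only the Python standard library. The document set, keyword-overlap retrieval, and endpoint shape are all illustrative stand-ins; a real service would use embedding search and forward the retrieved context to a fast model such as gpt-3.5-turbo-16k:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy knowledge base; a real one would hold embedded document chunks.
DOCS = {
    "billing": "Invoices are issued monthly and payment is due within 30 days.",
    "api": "The REST API accepts POST requests with a JSON body containing a question.",
}


def retrieve(question):
    """Naive keyword-overlap retrieval standing in for embedding search."""
    words = set(question.lower().split())
    best = max(DOCS, key=lambda key: len(words & set(DOCS[key].lower().split())))
    return DOCS[best]


def answer(question):
    context = retrieve(question)
    # A real service would now send `context` plus `question` to a fast
    # model and return its reply instead of the raw context.
    return {"question": question, "context": context}


class KBHandler(BaseHTTPRequestHandler):
    """Minimal endpoint a GPT Action could POST questions to."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        reply = json.dumps(answer(body["question"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)


# To serve for real: HTTPServer(("", 8080), KBHandler).serve_forever()
```

The GPT's Action schema would then point at this endpoint, and the slow part (generation) happens on whatever model the service chooses, not on GPT-4.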
The same text that you would normally put in the "Instructions" field on a GPT's Configure tab can simply be pasted as the starting prompt in an ordinary ChatGPT 3.5 session (without using the GPTs feature).
You won’t have actions. You won’t have the quick start buttons. You won’t have an easily shareable link to a GPT. You won’t have a GPT that appears in the GPT Store.
BUT… your GPT will be able to answer more than a dozen prompts before announcing you’ve used up your quota for the day. On a paid account. Sigh.
GPTs are basically just prompt grounding with a bit of glue to Actions.