What makes a custom GPT superior to just prompting ChatGPT?

Are there any differences between using a custom GPT versus just prompting ChatGPT (other than not having to paste an initial prompt at the beginning of every conversation)?

For example, if I wanted to interact with a coding expert, would there be any differences between the following:

  1. A custom GPT that has been instructed to act as a coding expert.
  2. Standard ChatGPT that has been prompted to act as a coding expert.

Also, I know that one obvious advantage to a custom GPT is giving it actions that it can take (such as API requests), but my main question is just about the instructions/prompting.

I apologize if I’m not wording my post and questions clearly, but I’m just curious about this stuff.


No. Not at all.

Besides maybe a GPT with instructions that only someone very knowledgeable in the industry could prepare, there’s no benefit.

Even then, it’s pretty straightforward to have the vanilla cGPT create these instructions.

Besides actions, personally I find GPTs to be a gimmick.

BUT, in the future it may be that cGPT or a GPT can call others for assistance automatically. That would be neat.

Then again, it would be more useful to build one for yourself than to rely on a third-party one.


In my opinion, unless you use GPTs with actions, there is really no advantage, especially for coding. As far as I understand, GPTs still use GPT-4, so the standard model using GPT-4o will give better and faster responses. When it comes to coding in particular, actions are really not that useful. At the same time, GPTs code much slower than the base model.


I think the main benefit of a custom GPT vs generic chat interface is the ability to add additional reference content to the GPT. For example, you could add some examples of the type/style of output you’re expecting and reference these in your system prompt (GPT instructions). The power of a good system prompt coupled with custom knowledge cannot be overstated.
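To make the point above concrete: the same "instructions plus reference content" setup can be reproduced outside a custom GPT by packing both into a system message. This is a minimal sketch; the instruction text, the style examples, and the helper name are all placeholders, not anything from a real GPT.

```python
# Sketch: emulating a custom GPT's "Instructions" + "Knowledge" with one
# system message. All strings below are made-up placeholders.

def build_messages(instructions: str, knowledge_snippets: list[str],
                   user_prompt: str) -> list[dict]:
    """Combine GPT-style instructions and reference examples into a
    system message, followed by the user's actual request."""
    knowledge = "\n\n".join(
        f"Reference example {i + 1}:\n{snippet}"
        for i, snippet in enumerate(knowledge_snippets)
    )
    system = (f"{instructions}\n\n"
              f"Match the style of the reference examples below:\n\n{knowledge}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    instructions="You are an expert Python coder. Answer concisely.",
    knowledge_snippets=["def add(a, b):\n    return a + b"],
    user_prompt="Write a subtract function in the same style.",
)
# The resulting list is the shape expected by a chat-completions call, e.g.
# client.chat.completions.create(model="gpt-4o", messages=messages)
# (requires the `openai` package and an API key).
```

The difference with a custom GPT is mainly convenience and persistence: the instructions and knowledge live with the GPT instead of being rebuilt per conversation.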


I am creating a custom GPT for worldbuilding.
Before, I had three or four massive documents in org-mode (that GPT can read just fine, BTW) that I used to get a conversation going. With a custom GPT, I have far more than that loaded into the model.

For coding, there are custom models for Clojure and Common Lisp, and so far, I have found them to be far more accurate and less error-prone than the vanilla model.

There are models for DynamoDB and CloudFormation in AWS. Both are far more accurate and less error-prone at these tasks than the vanilla model.

If all you do all day is program in Python, JavaScript, or Java, you are probably not going to see much of a benefit. But in the AI programming space, those are the “golden” languages everyone is focusing on.

It really depends on what you want to focus on, and how you want to go about solving your own or someone else’s problem.


I’ve been trying to find any information on what model the custom GPTs use and have turned up nothing. No way to select the model or access advanced settings. Do you know where you saw that they use GPT-4? And do you have a guess if/when they’ll use GPT-4o?


Probably obvious, but the other advantage of a custom GPT (and the reason I use it) is if you want other people to benefit from your work. I make GPTs that help people learn to run tabletop roleplaying games, and I would have to host this myself if I wanted to share with other people in the same way.


Yes! There are definite differences between the base model and CustomGPTs.

The degree of specialization available to GPTs through their Instructions, Knowledge Base, and Actions are all game changers.

The base model they run on could be 4 or 4o. :man_shrugging: @bjbosco There is no easily accessed information about what model the CustomGPTs use. In a recent email, OpenAI said that all CustomGPTs would be phased over to 4o eventually, but who knows when. The idea is to have cGPTs running on the highest available model.

Prompting a model to be an “expert in coding” is all well and good; it will do its best. But that prompt only exists in the current conversation, and it’s hard to tell when prompts fall out of the Context Window if you’re having a long conversation.
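A toy sketch of why that matters, assuming a rough four-characters-per-token estimate and a tiny window size (real tokenizers and context windows differ considerably):

```python
# Toy illustration of prompt text "falling out" of a finite context window.
# The 4-characters-per-token estimate and the window size are assumptions;
# real models use proper tokenizers and much larger windows.

def estimate_tokens(text: str) -> int:
    """Crude heuristic: about one token per four characters."""
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit the budget; older ones
    (including an initial 'act as an expert' prompt) are silently dropped."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > max_tokens:
            break
        total += cost
        kept.append(msg)
    return list(reversed(kept))

history = ["You are an expert coder."] + [f"message {i} " * 20 for i in range(50)]
window = trim_to_window(history, max_tokens=300)
# After a long enough conversation, the opening instruction is no longer
# in the window at all.
```

A custom GPT avoids this particular failure because its instructions are re-supplied by the platform rather than living only at the start of the chat transcript.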

I think @RonaldGRuckus has a good point that third-party GPTs haven’t figured out how to be valuable yet. But PersonalGPTs are the bee’s knees, friend.


Hi! Unfortunately, GPT-4o is really slow for me when I write multiple messages. Does anyone know how I can change the settings in a custom GPT to use GPT-4 instead of GPT-4o?


I see the primary benefit as providing it a bunch of reference documentation it can pull from. Though with GPT-4o, it seems a lot less likely to actually use the documentation without heavy guidance, almost absurdly so.

Adding the text below to the instruction prompt of one of my GPTs seems to have worked well. (And yes, the last line in all caps was necessary :skull:) Otherwise, even if I asked it to reference its documentation, it would just start blabbering without searching its knowledge.

IMPORTANT AND ESSENTIAL: You must !! ALWAYS !! reference your provided knowledge documents, even if the user does not specifically ask you to! It is entirely the point of it being there, and you have proven unreliable when you do not reference it! You have been given those many documents and you MUST search through it when providing ANY answer, INCLUDING follow up questions! To be clear: Do not send ANY message without first referencing your knowledge! You must search your knowledge even if you think you know the answer already - it doesn’t matter, always look through your provided reference knowledge documents!
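For a rough picture of the behavior that instruction is trying to force, here is a sketch of a "search knowledge before answering" loop. The documents and the keyword matching are made up for illustration; a real GPT's retrieval is handled by OpenAI's file-search tooling, not by anything you write.

```python
# Illustration of "always search your knowledge first": retrieval happens
# before any answer is composed. KNOWLEDGE contents are invented examples.

KNOWLEDGE = {
    "style_guide.md": "Functions use snake_case. Docstrings are mandatory.",
    "api_notes.md": "The /v2/users endpoint requires an auth token.",
}

def search_knowledge(query: str) -> list[tuple[str, str]]:
    """Naive keyword match: return every document sharing a word with the query."""
    words = set(query.lower().split())
    return [(name, text) for name, text in KNOWLEDGE.items()
            if words & set(text.lower().split())]

def answer(query: str) -> str:
    hits = search_knowledge(query)  # search BEFORE composing any answer
    if not hits:
        return "No relevant knowledge found; answering from general training."
    sources = ", ".join(name for name, _ in hits)
    return f"Answering using: {sources}"

print(answer("Which endpoint requires an auth token?"))
```

The all-caps instruction is essentially begging the model to behave like `answer` above: never reply without having run the search step first.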



Yes, it will search the documents every time before answering instead of having learned them. I don’t know if something is wrong with my settings, but it makes the model feel not very smart.