CustomGPT "Learning" over time

I am curious if a CustomGPT learns more over time or through use/feedback, or, once I finish my rules/configuration and upload my files, does it know everything it's ever going to know immediately?

Out of the box, the underlying OpenAI model already has a lot of information. The first goal of a custom GPT is to specialize that knowledge.

In natural language, you can tell it, “I want you to be an HTML expert”.

Now the bot knows it is supposed to be an HTML expert. The problem is that, although it has the instruction, it doesn't know how to present that information, whether to novice, intermediate, or expert users.

This is where we have to specify how to deliver that information, in as much detail as possible. You can also supply your own knowledge base as a PDF file covering the specific topic, but you must be very clear about how it should use that information. At this point you should also tell it what it should not do: if you ask it for a food recipe, even though it is an HTML expert, it will still give you the recipe. That is where you tell it not to answer anything unless it is about HTML.

The bot's learning is not done by us. OpenAI states that in each interaction with users it learns not at the level of knowledge but at the level of natural language, because its training knowledge comes from the Internet. So remember: your job is to specialize it in a particular topic by telling it what to do and telling it what not to do.

Now you can tell it: “You are an expert in HTML, and your knowledge base is the HTML.pdf file.”
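As a sketch, the instruction block for such a GPT might look something like this (the file name and wording here are just placeholders, not an official format):

```text
You are an expert in HTML. Your only knowledge base is the attached
HTML.pdf file.

- Adapt your explanations to the user's level: novice, intermediate, or expert.
- Answer questions about HTML only.
- If the user asks about anything else (food recipes, travel, etc.),
  politely decline and steer the conversation back to HTML.
```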


I don’t think so. The GPT is essentially just a prompt. Each time you open the GPT, it submits that prompt, which also means the behaviour varies a little each time: sometimes better, sometimes worse.

It is not our job to train the model, but simply to tell it how to deliver that information: give it detailed instructions and be as specific as possible. I’ve found that natural language lends itself to ambiguity, so I focus more on JSON files to tell it what not to do.
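For example, a small constraints file along these lines can be uploaded next to the knowledge base (the field names are purely illustrative, not a format OpenAI prescribes):

```json
{
  "role": "HTML expert",
  "knowledge_base": "HTML.pdf",
  "do_not": [
    "answer questions unrelated to HTML",
    "provide food recipes or other off-topic content",
    "invent HTML tags or attributes not found in the knowledge base"
  ]
}
```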

A vague instruction fails:

- Get me a cup of coffee.
  (On the way to the coffee there is a toy, the bot gets distracted: error.)

Detailed instructions work:

- Bring me a cup of coffee.
- Don’t bring me anything else.
- Go down the stairs, open the door, go out into the street, cross at the zebra crossing.
- Watch out for cars; cross only if the light is green.
- Go into the store and ask for the cup of coffee.
- Come back the same way.

A custom GPT does not “learn” over time, no matter how often it is used or what feedback is provided.

OpenAI may use aggregated data to improve their models, but that is the most detailed language we have available as to what that actually means. Keep in mind, this is how they improve ChatGPT, the “base” model. A custom GPT is essentially a layer above this stack.

So, as mentioned above, the knowledge files become instantly retrievable as context, and the prompting lets you express how the GPT should use the context it is given. It’s not really “learning”, but rather a “walkthrough” guidebook that supplements the model.
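Conceptually, the retrieval step works something like the toy sketch below. This illustrates retrieval-augmented prompting in general, not OpenAI's actual implementation, and the chunking/scoring here is deliberately naive:

```python
# Toy illustration of how knowledge files act as retrievable context:
# the file is split into chunks, the chunk most relevant to the question
# is retrieved, and it is injected into the prompt for that single turn.
# Nothing is learned or stored between turns.

def chunk(text, size=8):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    """Return the chunk sharing the most words with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def build_prompt(instructions, question, chunks):
    """Assemble the context sent to the model for one conversation turn."""
    context = retrieve(question, chunks)
    return [
        {"role": "system", "content": instructions + "\n\nContext:\n" + context},
        {"role": "user", "content": question},
    ]

html_doc = ("The <a> tag defines a hyperlink in HTML. "
            "The <table> tag defines an HTML table. "
            "The <img> tag embeds an image.")

messages = build_prompt("You are an HTML expert.",
                        "What does the a tag do?",
                        chunk(html_doc))
```

Each call to `build_prompt` starts from scratch: the only "memory" is whatever context gets assembled into the messages for that turn.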

For additional clarification: Each ChatGPT conversation is an “instance” of the model, meaning it is like you are pulling in a different ghost of the original model to talk to. It does not “learn” in a traditional sense, but rather takes in all of the context it has been provided to whip up a response. It may be exhibiting behavior that would look like learning, but the only way for it to truly “learn” is through fine-tuning (or hard training).

When you create a custom GPT, everyone can call up their own ghost of your GPT. When the conversation is over, the ghost fades away back into the ether. There is no exchange of information between any ghosts / inferences of a model. This is true for all LLMs not running locally on a home computer.
