CustomGPT "Learning" over time

I am curious whether a CustomGPT learns more over time through use or feedback, or whether, once I finish my rules/configuration and upload my files, it knows everything it's ever going to know immediately?

By itself, the OpenAI-backed bot already has a lot of information. The goal of a GPT is first to specialize that knowledge.

In natural language you can tell it, "I want you to be an HTML expert."

Now the bot knows it is an HTML expert, but the problem is that although it has the instruction, it does not know how to deliver that information, whether for novice, intermediate, or expert users.

This is where we have to specify how it should deliver that information, and we must be as detailed as possible. You can also supply your own knowledge base as a PDF file on the specific topic, but you must be very clear about how it should use that information. At this point you should also tell it what it should not do: if you ask it for a food recipe, even though it is an HTML expert, it will give you the food recipe. That is where you should tell it not to give any information unless it is about HTML.

The bot's learning is not done by us; OpenAI specifies that in each interaction with users it improves not its knowledge but its handling of natural language, since its training knowledge base is taken from the Internet. So remember, your job is to specialize it in a particular topic by telling it what to do and what not to do.

Now you can tell it: "You are an expert in HTML, and your knowledge base is the HTML.pdf file."


I don’t think so. The GPT is just a kind of prompt. Each time you open the GPT it submits that prompt, which also means the behaviour is a little different each time, sometimes better, sometimes worse.

It is not our job to train the model, but simply to tell it how to deliver that information: give it detailed instructions and be as specific as possible. I’ve found that natural language lends itself to ambiguity, so I rely more on JSON files to tell it what not to do.

A vague instruction leaves room for error:

- Get me a cup of coffee. (On the way to the coffee there is a toy, and it gets distracted: error.)

A detailed instruction removes the ambiguity:

- Bring me a cup of coffee.
- Don’t bring me anything else.
- Go down the stairs, open the door, go out into the street, cross at the zebra crossing.
- Beware of cars; cross only if the light is green.
- Go into the store, ask for the cup of coffee.
- Come back the same way.
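As a rough sketch, the same "do this, not that" structure could be written as a JSON instruction file like the one below. Note that every field name here is invented for illustration; GPTs do not require any particular schema, this is just one way to make the instructions unambiguous:

```json
{
  "role": "HTML expert",
  "audience_levels": ["novice", "intermediate", "expert"],
  "do": [
    "Answer only questions about HTML",
    "Ask the user their level before explaining"
  ],
  "do_not": [
    "Answer off-topic requests such as food recipes",
    "Bring anything else"
  ],
  "knowledge_base": "HTML.pdf"
}
```

The point is not the JSON itself but that an explicit allow/deny structure leaves less room for the model to interpret.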

A custom GPT does not “learn” over time, no matter how often it is used or what feedback is provided.

OpenAI may use aggregated data to improve their models, but that is about as specific as their public language gets on what this actually means. Keep in mind, this is how they improve ChatGPT, the “base” model. A custom GPT is essentially a layer above this stack.

So, as mentioned above, the knowledge files become instantly retrievable as context, and the prompting lets you express how the GPT should use the context it is provided. It’s not really “learning”, but rather a “walkthrough” guidebook that supplements the model.

For additional clarification: Each ChatGPT conversation is an “instance” of the model, meaning it is like you are pulling in a different ghost of the original model to talk to. It does not “learn” in a traditional sense, but rather takes in all of the context it has been provided to whip up a response. It may be exhibiting behavior that would look like learning, but the only way for it to truly “learn” is through fine-tuning (or hard training).

When you create a custom GPT, everyone can call up their own ghost of your GPT. When the conversation is over, the ghost fades away back into the ether. There is no exchange of information between any ghosts / inferences of a model. This is true for all LLMs not running locally on a home computer.


I’m new to creating GPTs, but loving the idea and possibilities.

If I did want my GPT to learn about me (I’ve a personal trainer gpt), can I use an API to write data from each interaction to a spreadsheet, then train my GPT to always reference that spreadsheet in responding?

Every session where you interact with a GPT is separate, and there is no built-in permanence mechanism between sessions that can allow a memory feature. In fact, the “memory” of ChatGPT (where it can write little snippets about you that it wants to remember) is turned off during GPT interactions.

If it was your personal unshared GPT used only in your account, you could upload a text document (not a spreadsheet). This becomes knowledge that the GPT can search by query, via a tool called ‘file_search’. You can say, “before responding, always use file_search to find if there is more information about the topic” or “file_search contains important preferences about the user, always search using a summary of the latest question”. The knowledge is not automatically injected or learned, and retrieval is not guaranteed to be complete.

External GPT tools (used like that file search tool) which can call an API you create on a server are also possible; these are called Actions. But again, the AI must decide that the tool’s description is useful for the answer, invoke it, get a response, and then proceed. This is mostly useful for on-demand services, like “get the weather” or “personal trainer schedule”.
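An Action is described to the GPT with an OpenAPI schema. Below is a minimal sketch of what one for a “trainer schedule” service could look like; the server URL and endpoint are hypothetical placeholders, not a real service:

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Trainer schedule", "version": "1.0.0" },
  "servers": [{ "url": "https://example.com" }],
  "paths": {
    "/schedule": {
      "get": {
        "operationId": "getTrainerSchedule",
        "summary": "Return the user's personal trainer schedule",
        "responses": {
          "200": {
            "description": "Schedule as JSON",
            "content": { "application/json": { "schema": { "type": "object" } } }
          }
        }
      }
    }
  }
}
```

The `summary` and `operationId` are what the model reads when deciding whether the tool is useful for the current question, so they are worth writing carefully.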

Automatic persistent placement of information is not an option given in ChatGPT. That would need custom development programmed on the pay-per-use API, both to have an AI with a unique purpose and to keep total control of automatic injection of knowledge text into its behavior, which could be dynamic or which the AI could update as a cross-session memory.
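On the API side, the core of such a cross-session memory is simple to sketch. The following is a minimal, hypothetical Python example (the file name, function names, and prompt wording are all invented) of persisting notes between sessions and injecting them into the system prompt; the actual chat-completion call is omitted:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical storage location for cross-session notes; a real
# deployment would key storage per user, e.g. in a database.
MEMORY_FILE = Path(tempfile.mkdtemp()) / "trainer_memory.json"

def load_memory() -> list[str]:
    """Read previously saved facts about the user, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Append a new fact and persist it for future sessions."""
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def build_system_prompt(base_prompt: str) -> str:
    """Prepend stored facts to the prompt sent with every API call."""
    facts = load_memory()
    if not facts:
        return base_prompt
    notes = "\n".join(f"- {f}" for f in facts)
    return f"{base_prompt}\n\nKnown about the user:\n{notes}"

# One session writes a note; the next session's prompt includes it.
remember("Prefers morning workouts")
print(build_system_prompt("You are a personal trainer."))
```

The model itself never “learns” anything here; your code simply re-supplies the saved context on every call, which is exactly the kind of total control the API route gives you.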


How would those instructions be formatted as JSON, and why is that better than text with bullet points?
How do you come up with instructions like “do not bring anything else”?

That’s super helpful. I think I need to go down the pay-per-use API route. I appreciate your detailed explanation; as a newbie, this community is golden. :pray:t2:
