Are 'Custom GPTs' independent from ChatGPT?

I am investigating. I notice that ‘Custom GPTs’ seem to combine doing your own fine-tuning and adding actions (plugins), all through the API. Fine. But if I build a ‘Custom GPT’, am I independent from ChatGPT? For instance, ChatGPT does its own analysis and handling of the prompt, calls its own plugins (e.g. Python for doing calculations), and has its own ‘harmless’ filtering. Is a ‘Custom GPT’ still enhanced/protected by all that? Or, if you build your own, are you responsible for doing your own enhancement/protection? Can you include OpenAI’s enhancements/protections?

There is no fine-tuning being done. You are not changing the AI model behind ChatGPT.

How does a session start when using a GPT? With the same system message “You are ChatGPT…”

Besides your additional instructions, a “GPT” just adds the option of a knowledge tool for retrieving from uploaded files, plus plugin-style external actions.

It runs in someone’s ChatGPT Plus account, with whatever AI model is provided when GPT-4 is selected.

GPT instructions can make the AI less likely to refuse requests. Conversely, because those instructions are framed as coming “from a user”, rude instructions make it more likely that any operation of the GPT is refused.

The safety moderation that scans input and output, flagging it in color, is not changed.
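If you want comparable screening in an application of your own, OpenAI’s standalone moderation endpoint can be called on any text. A minimal sketch with the openai Python library (v1.x); the input string is just an example:

```python
# Minimal sketch: screening text with OpenAI's standalone moderation
# endpoint. This is the API-side counterpart of the scanning that
# ChatGPT applies automatically to GPT conversations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.moderations.create(input="Example text to screen")
print("flagged:", result.results[0].flagged)
```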

This answer wasn’t quite clear (for me, at least) and I have no idea who marked this as a solution, but it wasn’t me.

If people add their own fine-tuning to GPT-4, it stands to reason that they have in fact created a slightly changed set of parameters for the GPT-4 algorithm to use. Of course, the algorithm (transformers and all) remains the same, but it is the parameters that determine much of what the model actually does.

So, my understanding is:

  • Person A can add fine-tuning to GPT-4, thus creating a (very slightly) changed parameter set to use
  • Person A can add actions (formerly plugins)
  • Person A can add specific prompt changes/additions to always use (in-context learning)
    These three make up ‘a’ GPT (which can then be offered for others to use in the store).

Is this correct?

Thanks for the rest of the answer.

Fine-tuning is a rather lengthy procedure of customizing the weights of a model through additional training (similar in kind to the initial training of the AI), and of developing new training sets and sequences upon which to reweight the model.

This experimentation can be done via the API, at one’s significant expense, to develop a specialist or altered AI.
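For reference, here is roughly what launching such a job looks like with the openai Python library (v1.x). The training file name is a placeholder, and gpt-3.5-turbo stands in for whichever of the smaller, tunable models you pick:

```python
# Minimal sketch: starting an API fine-tuning job. "my_training_data.jsonl"
# is a placeholder for a file of chat-formatted training examples.
from openai import OpenAI

client = OpenAI()

# Upload the training set, then launch the (lengthy, billed) job.
training_file = client.files.create(
    file=open("my_training_data.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # placeholder for a fine-tunable model
)
print(job.id, job.status)
```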

GPTs are not “tuning”. We don’t use “fine-tune” or “fine-tuning” to describe parameters or prompts.

Everything about GPTs is just providing the ChatGPT AI with different prompt context inputs. A GPT is primarily instructions, followed by specifications of tool functions the AI can emit (the overall mechanism the ChatGPT model was trained on by OpenAI).

So yes, one can provide “instructions”, upload “knowledge” files, specify function “actions”, and then, with account verification, share the agent framework that has been specified with others.
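To make the “just prompt context” point concrete, here is a sketch of roughly what a GPT reduces to if you rebuild it yourself on the chat completions API: instructions become a system message, and an action becomes a tool definition. The instruction text and the get_stock_price action are invented examples, and the exact context ChatGPT assembles internally is not published:

```python
# Rough sketch of a "GPT" rebuilt on the chat completions API.
# The instructions field becomes a system message; an "action"
# becomes a tool/function the model can choose to call.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The builder's "instructions", placed into the prompt context.
        {"role": "system", "content": "You are StockHelper, a concise assistant."},
        {"role": "user", "content": "What is ACME trading at?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_stock_price",  # hypothetical action
                "description": "Look up the latest price for a ticker symbol.",
                "parameters": {
                    "type": "object",
                    "properties": {"ticker": {"type": "string"}},
                    "required": ["ticker"],
                },
            },
        }
    ],
)
print(response.choices[0].message)
```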

Thank you. So, for now, GPTs are in-context learning (context size depending on the model) plus plugins/actions.

It is also possible to add your own fine-tuning to a model (https://platform.openai.com/docs/guides/fine-tuning), but this is available only for a few smaller models.

It is unclear whether the two can be combined yet (but at some stage they probably will be).