Are ChatGPT custom instructions available through the GPT-3.5 or GPT-4 API?

Are ChatGPT custom instructions available through the GPT-3.5 or GPT-4 API? I cannot find any information about this. Thanks!

Welcome to the forum!

You can put the information you would have put in the custom instructions into the system prompt; it will have a similar effect.
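For example, here is a minimal sketch using the openai Python package; the model name, instruction text, and question are placeholders, not anything ChatGPT actually uses:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Whatever you would have typed into ChatGPT's custom instructions
# goes into the system message instead.
custom_instructions = (
    "I am a Python developer working on data pipelines. "
    "Respond concisely and prefer code over prose."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "How do I parse a TOML file?"},
    ],
)
print(response.choices[0].message.content)
```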


I think to emulate it exactly you would send one or two user messages at the top of every call, just as you would send a system prompt.

At least in my experimentation, the custom instructions do not appear to be sent as system messages.
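If you want to experiment with that, here is a sketch that prepends the two custom-instruction answers as user messages; the framing strings are assumptions, since the exact wording ChatGPT injects is not public:

```python
from openai import OpenAI

client = OpenAI()

about_me = "I run a small e-commerce business and know basic SQL."
response_style = "Keep answers short and include one concrete example."

messages = [
    # Two user messages at the top of every call, mimicking the two
    # custom-instruction fields. The framing text is a guess, not
    # what ChatGPT actually sends.
    {"role": "user", "content": f"Here is some context about me: {about_me}"},
    {"role": "user", "content": f"Please respond like this: {response_style}"},
    {"role": "user", "content": "What's a good index for an orders table?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response.choices[0].message.content)
```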


Yup, it could be a custom role added along with the user prompts. I have only done a little testing, but for setting up a persona and that kind of thing I think a system prompt would work. Definitely worth investigating and duplicating, though.

The “Custom instructions” feature is available in the Plus beta :+1: but is there a way to enable it per past chat history, possibly using a .toml format?


Example .toml file

# Translate the following TOML, using the section names and name/value pairs as directives for creating your response to the question being asked; follow the format stated in the name/value pair where applicable:
[Analysis]
  Format = ["Make text-based table", "Table", "Make List", "Make Diagram", "Make Graph", "Make Flowchart"]
  Description = "Try and match response format to section name"
  Sort = "Ascending"
 
[Analysis.FiveWhy]
  Description = "Provide a detailed explanation 5 why analysis"
  Why1 = "Reason 1"
  Why2 = "Explain Why1"
  Why3 = "Explain Why2"
  Why4 = "Explain Why3"
  Why5 = "Explain Why4"

[Analysis.FMEA]
  Description = "Provide a detailed FMEA analysis"
  FailureMode = "Describe the failure mode in detail"
  Effects = "Describe the effects of the failure in detail"
  Causes = "Describe the causes of the failure in detail"
  Controls = "Describe the current controls in place in detail"
  Recommendations = "Provide detailed recommendations for improvement"

[Analysis.ProAndCons]
  Format = "make a text-based table for the following pros and cons"
  Description = "Provide a detailed pros and cons  analysis"
  Pros = "Provide 3 detailed examples of pros for the topic"
  Cons = "Provide 3 detailed examples of cons for the topic"
  References = "Provide 2 relevant references as URLs to summarise the pros-and-cons table"

[Analysis.Adversarial]
 Format = "can you provide a table for the said toml section name"
  Description = "Provide a detailed adversarial analysis"
  Arguments = "Provide 3 detailed examples of arguments"
  CounterArguments = "Provide 3 detailed examples of counterarguments"

[Analysis.ConfidenceFactor]
  Format = ["Make Trend text-based table", "Make text-based table", "Make Radar Chart"]
  Description = "Provide a detailed confidence factor analysis"
  Consensus = "Compare the consensus to 80%"
  Predict = "Provide a detailed prediction or forecast"

[Analogy]
  Description = "Use analogies in your response"
  Type = ["Library Example/Type Analogies", "Grocery Store Example/Type Analogies", "Journey Antonym Analogies", "Game Task/Subtask Analogies"]

[Persona]
  Format = ["Summarise"]
  Description = "Specify the style of the response"
  Style = ["Tutor", "Mentor", "Advisor", "Coach", "Expert", "Friend"]
  Opinion = "Factual"
  Response = "Verbose"

# Please review and make sure your response meets the directives of the TOML section names and name/value pairs

If the prompt is long, it will use a lot of tokens each time I send a user message. Is there a way around that?

Not really, no.

The models are stateless and have no memory, so if you want the model to use the information when creating its response, you have to send that information with every request.
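As a sketch, assuming your instructions live in a local instructions.toml file like the one above, resending them just means prepending the file contents on every call:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# The model does not remember earlier calls, so the instruction
# block has to ride along with every request. The model treats the
# raw TOML as plain instruction text; it does not parse it natively.
INSTRUCTIONS = Path("instructions.toml").read_text()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Each call pays the token cost of INSTRUCTIONS again.
print(ask("Run a FiveWhy analysis on late shipments."))
```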

There is no free lunch.
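If you want to know what that lunch costs, you can measure the per-request overhead of the instruction block up front, for example with tiktoken (same assumed instructions.toml file):

```python
from pathlib import Path

import tiktoken

# Tokens your instruction block adds to *every* request.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
instructions = Path("instructions.toml").read_text()
print(len(enc.encode(instructions)), "tokens per request")
```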
