Matching UI Custom Instruction Reliability with GPT-3.5 system messages in the API?

I’ve observed that the model follows instructions more reliably when they are set through the custom instructions UI than when the same instruction templates are provided via the system message in the GPT-3.5 Turbo API.

I’m wondering how to achieve the same degree of reliability using system messages. It would be great if you could share your experience with this, specifically for GPT-3.5.

1 Like

Yes, sir, we are here to learn and cooperate with each other in all fields in order for development and prosperity to prevail in this virtual world.

I don’t think the custom instructions are part of the system message.

Normally you won’t be able to exceed the 4096-token limit on prompt and completion in a single request (ok, maybe the system prompt isn’t counted in those 4096 tokens, so in reality the model would answer within the 8k that GPT-4 provides - which still leaves the following options valid).

But I have added 400 tokens of custom instructions and they don’t appear to count toward that limit.

So I conclude you won’t be able to do this with a system prompt alone (or your prompt just needs to be better - I actually have no problems anymore getting GPT-3.5 to follow instructions).
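
If you want to check how much room a set of instructions actually takes, you can count the tokens yourself with tiktoken. A minimal sketch; the instruction text and the 4096 figure are just example assumptions:

```python
# Count how many tokens a candidate system prompt consumes, to see how much
# of the context window is left for the conversation and the completion.
import tiktoken

custom_instructions = "Always answer in bullet points. Keep responses under 150 words."

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt_tokens = len(enc.encode(custom_instructions))

context_window = 4096  # classic gpt-3.5-turbo limit; larger variants exist
print(f"System prompt uses {prompt_tokens} tokens, "
      f"leaving roughly {context_window - prompt_tokens} for messages and the reply.")
```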

What you could do is split up the request: send it to the model in one request and ask for summarized answers from different viewpoints, then pass those viewpoints to another API request and ask for a response that answers the user’s request while keeping the summarizations in mind.
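
A rough sketch of that split using the openai Python client; the model name, prompt wording, and viewpoint list are illustrative assumptions, not a fixed recipe:

```python
# Two-step approach: first ask for viewpoint summaries, then feed those
# summaries into a second request that answers the user.
from openai import OpenAI

client = OpenAI()
user_request = "Explain how to structure a weekly fitness plan."

# Step 1: summarize the request from a few different viewpoints.
summaries = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Summarize the user's request from three viewpoints: beginner, coach, physiotherapist."},
        {"role": "user", "content": user_request},
    ],
).choices[0].message.content

# Step 2: answer the original request, with the summaries as extra context.
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer the user's request. Keep the following viewpoint summaries in mind:\n" + summaries},
        {"role": "user", "content": user_request},
    ],
).choices[0].message.content

print(answer)
```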

Or you could use a vector DB to store the user information and add a function to the API call that asks the vector DB for specific information the response should include.
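
For the vector DB route, here is a hedged sketch using the chat completions tool-calling interface; `query_user_store` is a hypothetical stand-in for whatever vector database lookup you actually use:

```python
# Expose a lookup over stored user information as a tool the model can call.
import json
from openai import OpenAI

client = OpenAI()

def query_user_store(query: str) -> str:
    # Placeholder: in practice this would embed `query` and search your vector DB.
    return "User prefers concise answers and is vegetarian."

tools = [{
    "type": "function",
    "function": {
        "name": "query_user_store",
        "description": "Look up stored information about the user.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Suggest a dinner recipe for me."}]
msg = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=tools
).choices[0].message

# If the model asked for user information, run the lookup and send the result back.
if msg.tool_calls:
    call = msg.tool_calls[0]
    result = query_user_store(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    msg = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, tools=tools
    ).choices[0].message

print(msg.content)
```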

Or you take the user’s prompt and the response, send both to the API and ask for a list of bullet points on how the response could be better, then take the user prompt, the response, and the bullet points, call the API a third time, and ask it to implement the changes suggested in the 4 most important bullet points…
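
That three-call critique loop could look roughly like this; the prompt wording and the little helper are just one way to arrange it:

```python
# Get an answer, ask for improvement bullet points, then ask for a revision
# that applies the most important ones.
from openai import OpenAI

client = OpenAI()
model = "gpt-3.5-turbo"
user_prompt = "Write a short product description for a solar-powered lamp."

def chat(*messages):
    return client.chat.completions.create(
        model=model,
        messages=[{"role": role, "content": content} for role, content in messages],
    ).choices[0].message.content

# 1. First draft.
draft = chat(("user", user_prompt))

# 2. Ask for bullet points on how the draft could be better.
critique = chat(
    ("system", "List bullet points on how the assistant's response could be improved."),
    ("user", f"Prompt: {user_prompt}\n\nResponse: {draft}"),
)

# 3. Ask for a revision that implements the four most important suggestions.
final = chat(
    ("system", "Rewrite the response, implementing the 4 most important suggestions."),
    ("user", f"Prompt: {user_prompt}\n\nResponse: {draft}\n\nSuggestions:\n{critique}"),
)

print(final)
```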

Lol, I am way too deep into that… I can easily come up with another 20 solutions :grin:

3 Likes

According to the “help” documentation, custom instructions should be similar to the system prompt:

System messages are to our API as custom instructions are to ChatGPT in the UI, and custom instructions don’t offer additional token savings.

So, in this case it may be more a question of how to set temperature and top-p for the API calls?

2 Likes

The sampling parameters temperature and top-p don’t change the behavior; they only change the allowed deviation from it, i.e. how many alternate token choices are possible when generating a response.
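
For reference, this is where those parameters sit in an API call; the values are arbitrary examples, and neither adds instruction-following on its own:

```python
# temperature/top_p only widen or narrow the range of token choices.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer in exactly three bullet points."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    temperature=0.2,  # low: fewer alternate token choices, more deterministic
    top_p=1.0,        # keep the full nucleus; usually adjust one or the other
)
print(response.choices[0].message.content)
```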

Since the exact insertion of custom instructions is unclear, replicating them exactly is also unclear. Because there are separate boxes for user info and AI behavior in ChatGPT custom instructions, they could be prefixed with text that tells the AI what each one is for, or they could use their own role names that the API can’t produce.
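
One way to approximate those two boxes over the API is to label them inside a single system message. The wrapper wording below is purely a guess, since the exact template ChatGPT uses isn’t published:

```python
# Approximate the two custom-instruction boxes with labeled sections
# inside one system message.
from openai import OpenAI

client = OpenAI()

about_user = "I am a data engineer who prefers short, code-heavy answers."
response_style = "Be terse. Use Python examples. No apologies or filler."

system_message = (
    "The user provided the following information about themselves:\n"
    f"{about_user}\n\n"
    "The user provided the following instructions for how you should respond:\n"
    f"{response_style}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "How do I deduplicate rows in pandas?"},
    ],
)
print(response.choices[0].message.content)
```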

3 Likes

I’d say the conversation has progressed far enough to restate the task at hand:
GPT-3.5 does not follow all instructions in this case, and we are disregarding the code interpreter.

Maybe we should look at an example?

2 Likes

Intelligence works miracles, that’s my motto