Thread instructions work better than the assistant's own instructions

Hello, I am working on a personal project related to OpenAI assistants.

I have noticed that the instructions I provide to GPT within the thread work significantly better than the ones I use for creating the assistant.

For instance, when I have certain functions that require validation by GPT before execution, the instructions provided within the thread are followed much more effectively. However, the instructions provided for the assistant seem to be ignored most of the time.

I’m curious what the assistant’s instructions are actually for. Why should I use them if they don’t seem to work reliably, and would re-sending instructions on every run increase the cost of using GPT?

I understand that the instructions within the thread don’t necessarily override the original instructions, since GPT still seems to retain information from both sets. Currently I’m combining both sets. I’m contemplating either moving everything into the thread instructions, especially since I need to inject dynamic values, or keeping the general flow of conversation in the assistant’s instructions and putting the validation rules in the thread instructions.
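
For reference, here is roughly what my current setup looks like. It’s a minimal sketch using the Python SDK; the model, assistant name, and the cancel_order validation rule are placeholders, not my real values:

```python
from openai import OpenAI

client = OpenAI()

# General conversation flow lives in the assistant-level instructions.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo",  # placeholder model
    name="order-helper",
    instructions="You help users manage their orders. Keep a friendly, concise tone.",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Please cancel order 1234.",
)

# Validation rules and dynamic values go in the per-run instructions,
# since they change from run to run.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions=(
        "Before calling cancel_order, ask the user to confirm the order id. "
        "Today's date is 2024-05-01."  # dynamic value injected on each run
    ),
)
```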

What are your thoughts on this?

When you initiate a run (the create-run call), there are two fields that determine which instructions the Assistant follows:

a) instructions
b) additional_instructions

In your example, you seem to have used instructions. This field overrides the original instructions given to the assistant, so the results match what you described.

What I would recommend is to use additional_instructions, which appends your new instructions to the original ones instead of replacing them.

Remember, additional_instructions is only valid for that run. So if you have to maintain it, or grow the additional data over time, you would need to save that state yourself.
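
A minimal sketch of what that looks like with the Python SDK (the thread id, assistant id, and validation text are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# additional_instructions is appended to the assistant's original instructions
# for this run only; the assistant-level instructions remain in effect.
run = client.beta.threads.runs.create(
    thread_id="thread_abc123",   # placeholder thread id
    assistant_id="asst_abc123",  # placeholder assistant id
    additional_instructions=(
        "Before executing any function, ask the user to confirm the parameters. "
        "Today's date is 2024-05-01."  # dynamic value for this run
    ),
)

# Passing instructions= here instead would replace the assistant's original
# instructions for the run, which is why they appeared to be ignored.
```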

Have you used the run instructions?

As of now I am just sending the same instructions on every run. It works for me, though.
