Role for adding messages to instruct - Assistant API

Hi,
We are using GPT-3.5 Turbo 0125 with the Assistants API for our use case. The problem we face is that the model occasionally uses its general knowledge to answer questions. To avoid this, we found a workaround: adding instructions between each conversation turn, which worked fine earlier in the playground. Now that roles have been introduced, we would like advice on which role we should use for this message in the API so the model sticks to the instructions: the user role or the assistant role?

If you are using Assistants instead of Chat Completions, may I recommend the beta v2 API? There you can specify a limited number of chat turns of history with the new truncation parameter, so old messages don't distract the AI, and with the run's additional_instructions you can place extra guidance in the main system prompt that starts a session (useful when the instructions change per run and aren't a permanent property of the assistant).
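As a minimal sketch of that pattern, here is how the per-run parameters might be assembled for the v2 Assistants API. The rule text, the assistant ID `asst_123`, and the `build_run_params` helper are all illustrative, not part of any real project; the parameter names (`additional_instructions`, `truncation_strategy`) are from the v2 runs endpoint, and the actual network call is left commented out:

```python
# Hypothetical per-run guidance; in v2 this rides along with the run
# instead of being stored as a permanent thread message.
SPROCKETS_RULES = (
    "If the user asks about the Sprockets Company, "
    "you must use file search before answering."
)

def build_run_params(assistant_id: str, last_n: int) -> dict:
    """Parameters you could pass to client.beta.threads.runs.create(...)."""
    return {
        "assistant_id": assistant_id,
        # Appended to the assistant's base instructions for this run only.
        "additional_instructions": SPROCKETS_RULES,
        # Cap how many prior thread messages the model sees.
        "truncation_strategy": {"type": "last_messages", "last_messages": last_n},
    }

params = build_run_params("asst_123", last_n=6)
# from openai import OpenAI
# run = OpenAI().beta.threads.runs.create(thread_id=my_thread_id, **params)
```

Because the guidance lives on the run rather than in the thread, the next run can carry different (or no) extra instructions without leaving clutter behind.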

The problem with Assistants is that when you place a message in a thread, it becomes a permanent addition. If, before each user message, you added a system message (the strongest way to instruct the AI with authority) that says "The next question must invoke a file search if it asks about the Sprockets Company" (which you can now do), then you end up with a chat history full of them.

Dynamically changing the messages, even appending extra post-prompt language to only the latest user input, is absolutely no problem to code when you use the Chat Completions endpoint.
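A small sketch of that Chat Completions technique, assuming you keep your own history list. The helper name, the example history, and the post-prompt wording are all made up for illustration; only the latest user turn gets the appended guidance, so the stored history stays clean:

```python
def with_postprompt(history: list[dict], user_text: str, postprompt: str) -> list[dict]:
    """Build the messages list for one request: copy the stored history and
    append guidance only to the newest user turn (history is not mutated)."""
    messages = list(history)
    messages.append({"role": "user", "content": f"{user_text}\n\n{postprompt}"})
    return messages

# Illustrative stored history, e.g. loaded from your DB.
history = [
    {"role": "system", "content": "Answer only from the provided documents."},
    {"role": "user", "content": "What does the Sprockets Company make?"},
    {"role": "assistant", "content": "Widgets, according to the catalog file."},
]

messages = with_postprompt(
    history,
    "Who is their CEO?",
    "(Answer only from file search results; do not use general knowledge.)",
)
# messages is what you would pass to chat.completions.create(model=..., messages=messages)
```

On the next turn you would store only the bare user text and the assistant's reply, then rebuild the post-prompt fresh, so the injected instruction never accumulates in the transcript.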

We are using the Assistants API beta v2. We store the conversation in the DB, so there is no issue managing the actual conversation. We are not trimming the history as of now, but we may do so in the future to reduce prompt tokens. So, as per your suggestion, it's better to add these messages with the assistant role.

An assistant-role message looks like something the assistant said previously, but it doesn't have as strong an influence on behavior.

An AI message like "From now on I will always remember to act sarcastically and give the user a mocking roast for asking his dumb questions", placed as something the AI said previously, is the kind of thing the AI apologizes for saying, not follows.