Providing context to the Chat API before a conversation

Welcome to our dev community.

The system message also plays a big role, though maybe not as big as it should at the moment. OpenAI is working on improving ChatML (the system/user/assistant format for prompts), so I'm sure it will get better.

Until then, I would play with putting your "instructions" into the system message and trying that system message BEFORE the user/assistant messages, then after them; people have had success with both. I think putting system last might weight it a bit more. (OpenAI has said the weighting on system will likely change, though.)

Another thing you can do is put the instructions into the system message and feed it one user / assistant exchange as an example… so…

System message: (Rewrite your instructions here…)
User message: Make up a user question here.
Assistant message: Make up an assistant answer here that fits your instructions.
User message: (Append new user input and send to API…)

or…

User message: Make up a user question here.
Assistant message: Make up an assistant answer here that fits your instructions.
User message: (Append new user input and send to API…)
System message: (Rewrite your instructions here…)

… at least for the first exchange. If you're going to be chaining calls, then after you have a few real user/assistant exchanges, you won't need to prepend the made-up one anymore.
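The first ordering above can be sketched as a helper that builds the messages list, seeding the made-up exchange only until real history exists. This is just an illustration; the instruction text, the example Q/A, and the `build_messages` helper name are all placeholders, not anything OpenAI prescribes.

```python
def build_messages(user_input, history=None):
    """Build a Chat API messages list: system first, then a
    made-up one-shot exchange, then any real history, then the
    new user input."""
    system = {"role": "system", "content": "(Your instructions here...)"}
    seed = [
        {"role": "user", "content": "Make up a user question here."},
        {"role": "assistant",
         "content": "Make up an answer that fits your instructions."},
    ]
    history = history or []
    # Only prepend the fabricated exchange while there is no real
    # history yet; later turns drop it.
    example = seed if not history else []
    return [system] + example + history + [
        {"role": "user", "content": user_input}
    ]

messages = build_messages("What's your return policy?")
```

The resulting list is what you'd pass as the `messages` parameter of a chat completion request. For the second ordering, append the system dict at the end instead of putting it first.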

It’s really all about experimenting for your particular use-case and seeing what works best. Keep in mind, though, that ChatML is likely going to change in the months/years ahead as it improves.

Also, "negative prompts" (i.e. telling the LLM not to do something) are sometimes difficult, depending on the size of the prompt, etc. The model can easily get confused, since "not" or "do not" is only a token or two, and it might be surrounded by a lot of tokens that suggest it should do that very thing. So it's often best to phrase things as a positive: tell it what you want, not so much what to avoid.
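As a quick illustration of that rephrasing, here is a hypothetical pair of system instructions; the wording is purely an example, not a recommended prompt.

```python
# Negative phrasing: the "not" is one small token among many that
# pull the other way ("answer", "questions", topic words).
negative = "Do not answer questions unrelated to cooking."

# Positive phrasing: states the desired behavior directly, including
# what to do with off-topic input.
positive = ("Only answer questions about cooking. "
            "Politely decline anything else.")
```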
