Single system prompt vs multiple prompts in a single query

I recently came across a technique called “post-prompt” (an extra instruction placed as the last message in the query) and found it very useful for improving obedience to “last-minute” instructions.
Meanwhile, I tend to add other system messages in between while compiling my query, e.g. the current date/time, availability of human agents, etc.

Now I have tried to inject another system message just before the context, telling the model to pay attention to specific details in that context (to avoid assumptions and invented facts), and it seems to be ignored.
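For anyone reading along, here is a minimal sketch of the layout I mean, using the OpenAI Python SDK. The model name, instruction wording, and context are placeholders, not my actual prompts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Layout: a main system prompt first, auxiliary system messages interleaved
# before the context, and a final "post-prompt" system message appended
# after the user turn as the last message in the query.
messages = [
    {"role": "system", "content": "You are a support assistant. Answer only from the provided context."},
    {"role": "system", "content": "Current date/time: 2024-05-01 10:00 UTC. Human agents are available."},
    {"role": "system", "content": "Pay attention to the exact figures in the context below; do not assume or invent values."},
    {"role": "user", "content": "Context:\n<retrieved documents here>\n\nQuestion: What is the refund window?"},
    # "post-prompt": the very last message, restating the critical instruction
    {"role": "system", "content": "Answer strictly from the context above. If the answer is not there, say you don't know."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```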

Question: what are your observations/experiences with multiple system messages scattered across a query? Do they tend to improve or worsen the model’s obedience to instructions?


Hi, this seems quite interesting.

I am thinking about using these “last-minute” instructions to suggest how to handle corner cases in tool responses (e.g. specific error responses, no results from queries, etc.); see the sketch below.
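Roughly something like this (a sketch only; the tool-result shape and wording are assumptions, not a tested implementation):

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a search assistant."},
    {"role": "user", "content": "Find orders for customer 1234."},
    # ... assistant tool call and tool response would normally go here ...
]

# Hypothetical tool outcome; inspected by the application, not the model.
tool_result = {"status": "no_results"}

# The idea: append a "last-minute" system message describing how to handle
# this specific corner case, just before requesting the final answer.
if tool_result.get("status") == "no_results":
    messages.append({
        "role": "system",
        "content": "The tool returned no results. Say so plainly; do not invent order data.",
    })

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```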

I can’t find any mention of the “post-prompt” term online; do you have a reference?

Also, have you experimented with gpt-4o or other models? Is there any evidence you can share?

Thanks in advance!

Here is one particular question: how do you prevent ChatGPT from answering questions that fall outside the scope of the context provided in the SYSTEM role message?
