Multiple messages vs single user message for chat prompting

I am working on cleaning up our bot implementation and would like some advice on best practice.

On the table are two styles:

  1. Multiple “user”/“assistant” exchanges.

  2. A single “user” message that covers everything.

An example of (1) would be:

  1. user: “CONTEXT: search results for cars are: …”
  2. user: “sam: how are you today”
  3. assistant: “I am fine”
  4. user: “cam: search for cars please”
  5. assistant: “there are 17 cars here”
  6. user: “sam: who am I?”

An example of (2) would be a single user message:

CONTEXT: search results for cars are: …

Conversation history is:
sam: how are you today
AI: I am fine
cam: search for cars please
AI: there are 17 cars here
sam: who am I?
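The two styles above can be sketched as message payloads. This assumes the OpenAI-style chat completions format (a list of role/content dicts); the `multi_turn` and `single_turn` names are just illustrative:

```python
# Style 1: context and history sent as separate user/assistant turns.
multi_turn = [
    {"role": "user", "content": "CONTEXT: search results for cars are: ..."},
    {"role": "user", "content": "sam: how are you today"},
    {"role": "assistant", "content": "I am fine"},
    {"role": "user", "content": "cam: search for cars please"},
    {"role": "assistant", "content": "there are 17 cars here"},
    {"role": "user", "content": "sam: who am I?"},
]

# Style 2: everything flattened into one user message,
# davinci-completion style.
single_turn = [
    {"role": "user", "content": (
        "CONTEXT: search results for cars are: ...\n\n"
        "Conversation history is:\n"
        "sam: how are you today\n"
        "AI: I am fine\n"
        "cam: search for cars please\n"
        "AI: there are 17 cars here\n"
        "sam: who am I?"
    )},
]
```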

(2) feels a bit like cheating, since we would be using davinci-style completion techniques for chat prompting. However, I find (1) a bit limiting, given:

  1. The name prefix only appears to work in GPT-4.
  2. More tokens are used in (1), since both the “user” and “assistant” role markers are counted, as far as I know.
  3. Context is too “loose” in (1): it feels like it was mentioned long ago, and gluing context into the middle of the chain destroys the conversation flow, from my experiments.
  4. LangChain appears to use (2), so there is precedent here.
  5. Claude only does (2) anyway, so it reduces the amount of code one needs to carry when integrating with multiple LLMs.


It depends on the use case, I believe.

I use multiple exchanges as a way to “fine-tune” the bot, for example to control the tone of the response. If you’re giving context that may include a conversation history, the second technique may work better.

You can also use multiple exchanges to guide it away from certain answers, e.g. “user: no, that doesn’t work because X. Give an answer with Y.”
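That steering pattern can be sketched as a helper that appends the rejected answer plus a corrective user turn before re-asking. This is a hypothetical sketch (the function name and OpenAI-style message dicts are my assumptions, not anything from a library):

```python
def reject_and_retry(messages, bad_answer, reason, constraint):
    """Append the model's rejected answer and a corrective user turn,
    so the next completion is steered away from the bad answer."""
    messages = list(messages)  # copy, don't mutate the caller's list
    messages.append({"role": "assistant", "content": bad_answer})
    messages.append({
        "role": "user",
        "content": (f"no, that doesn't work because {reason}. "
                    f"Give an answer with {constraint}."),
    })
    return messages

# Usage: reject a hard-coded-secret suggestion and ask again.
steered = reject_and_retry(
    [{"role": "user", "content": "How do I store the API key?"}],
    "Hard-code it in the source.",
    "the key would leak into version control",
    "an environment variable",
)
```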

I am seeing that replies (at least for GPT-3.5) are far more detailed when the prompt is multi-turn (user/assistant/user/assistant vs. a single user message).

I wonder if prompt engineering can circumvent this.

I guess:

user: message 1
assistant: message 2
user: message 3
assistant: message 4

can work if we need to work around this limitation.

I was going to make a new post, but saw that there is already a discussion on this.

What is the conclusion? It almost sounds like, if all you intend to do is give knowledge/context, doing it in a single user prompt will suffice?

I guess a few months later…

These days I get away with a single message and no examples (all examples live in the system prompt); the later versions of GPT-3.5 and 4 are a bit more forgiving than in the early days.


Is your context in the system message too? I’m putting it in the user messages.


We keep context in user/assistant chains.

Our implementation allows us to switch between function calls and a big system message with a “simulated” LangChain-like tool system.