Conversation API and Agents: How to manage user-facing messages?

We are building an assistant composed internally of multiple agents (Agents SDK), single LLM completions, and deterministic steps. We use the Conversation API to store the history. We pass the conversation ID through all the agents and LLM completions, so each agent/step has the history and new messages are automatically added. To present the messages to the user in the chat interface, we use the Conversation API endpoint to retrieve all the items, filter out what is not interesting for the user (tool calls, handoffs, etc.), and decode the content.
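A minimal sketch of that retrieval/filtering step. The item shapes here (`type`, `role`, `content` parts) are assumptions loosely modeled on Conversation API items, not the exact schema; the helper names are illustrative.

```python
# Sketch of the "filter and decode" step applied to retrieved conversation
# items. Item shapes are assumed, loosely based on Conversation API items.

def user_visible_items(items):
    """Keep only plain user/assistant messages; drop tool calls, handoffs, etc."""
    visible = []
    for item in items:
        if item.get("type") != "message":
            continue  # e.g. function_call, function_call_output, handoff
        if item.get("role") not in ("user", "assistant"):
            continue  # drop system/developer messages
        visible.append(item)
    return visible

def decode_content(item):
    """Flatten content parts into display text (text-only parts assumed)."""
    parts = item.get("content", [])
    if isinstance(parts, str):
        return parts
    return "".join(p.get("text", "") for p in parts if isinstance(p, dict))
```

The problem described below is exactly that this filter has nothing to key on: internal agent messages pass it just like user-facing ones.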

The issue is that in certain conditions we are not able to distinguish whether an item is a message internal to the agentic system or a message that should be shown to the user. A basic example would be:

user > "What about X?"

[agentic system begin]
    [routing agent]
      system > ...
      user > "What about X?"
      assistant > "user agent Y"
    [agent step]
      system > ....
      user > "What about X?"
      assistant > "X is foo bar"
    ....
[agentic system end]

The point is that all messages in the routing agent and agent step are written to the Conversation API (they need the history), but when we present the conversation to the user we can’t distinguish the three “What about X?” copies, nor tell that the “user agent Y” assistant message is internal.

This seems like a common problem; is there a known pattern for handling it?

My current takes are:

  • Create two conversations for the same user: one user-facing and one internal. The user-facing conversation is managed (write/read) manually via the Conversation API directly; the internal one is populated automatically via the conversation ID parameter in the Agents SDK/LLM completions
  • Make every item (even user input) a structured message so we can attach metadata per message. However, this metadata would also be seen by the LLMs as input
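The first option could be sketched like this. `write_item` and `run_agents` are hypothetical stand-ins for the real Conversation API write and Agents SDK run calls, injected so the flow is self-contained; only the mirroring logic is the point.

```python
# Sketch of the two-conversation pattern: user-visible messages are mirrored
# manually into a user-facing conversation, while the agentic system keeps
# writing routing/system/tool traffic to its own internal conversation.
# write_item and run_agents are injected stand-ins for the real API calls.

class DualConversation:
    def __init__(self, user_conv_id, internal_conv_id, write_item, run_agents):
        self.user_conv_id = user_conv_id
        self.internal_conv_id = internal_conv_id
        self._write_item = write_item    # (conversation_id, role, text) -> None
        self._run_agents = run_agents    # (conversation_id, text) -> final answer

    def handle_user_message(self, text):
        # 1. Mirror the user's message into the user-facing conversation.
        self._write_item(self.user_conv_id, "user", text)
        # 2. Run the agentic system against the internal conversation;
        #    all intermediate messages land there, not in the user-facing one.
        answer = self._run_agents(self.internal_conv_id, text)
        # 3. Mirror only the final answer back to the user-facing conversation.
        self._write_item(self.user_conv_id, "assistant", answer)
        return answer
```

Rendering the chat then means reading the user-facing conversation as-is, with no filtering heuristics; the cost is keeping the two conversations in sync manually.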

What am I missing?