Does a prompt ID persist across a conversation ID in the Responses API?

Hi everyone,
I am using the OpenAI Responses API with a conversation ID and store=True for a multi-turn chat app.
My prompt is stored on the OpenAI platform as a prompt ID.
I send the prompt ID only on the first user message; for later turns I call responses.create with the same conversation ID but without the prompt ID and without instructions.
I notice the assistant drifts and no longer follows the original rules as well.

Conversation creation

from openai import OpenAI

client = OpenAI()

conversation = client.conversations.create(
    items=[
        {"role": "assistant", "content": "Initial greeting or setup message"}
    ]
)
conversation_id = conversation.id


First user message

response = client.responses.create(
    conversation=conversation_id,
    model=model,
    prompt={"id": BASE_PROMPT_ID},
    input=user_message,
    store=True,
)


Later messages

response = client.responses.create(
    conversation=conversation_id,
    model=model,
    input=user_message,
    store=True,
)
  1. When using a conversation ID, does the content of a prompt ID become part of the stored conversation automatically after the first call, or is it applied only to that single request?

  2. If the prompt ID does not persist, do best practices recommend sending the prompt ID on every message to keep the system/developer constraints consistent?

  3. If I want to avoid sending the prompt ID every time, what is the correct pattern to persist the base rules across the conversation, for example by adding a developer message at conversation creation?

Treat “prompts” (the presets you create on the platform site) as just another input, like the “instructions” field (which prompts also contain): their ID must be sent on every API turn, along with the values of any variables used to populate them.

The text, tools, and model of a prompt are not persisted by reusing a previous response ID, nor by reusing a conversation ID.

To be sure your prompt is actually being used, do not send overriding API parameters such as a model ID in your call; rely on the prompt ID for every field it stores. You will get an error if it doesn’t work. It acts as your “assistant”.
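As a minimal sketch of the advice above (the helper name and placeholder prompt ID are my own, not from the SDK), each turn can be assembled so the stored prompt ID and its variables are always included:

```python
# Hypothetical helper (not part of the SDK) that builds the per-turn
# arguments so the stored prompt is re-applied on every call.
BASE_PROMPT_ID = "pmpt_example123"  # placeholder; use your stored prompt's ID

def build_turn_kwargs(conversation_id, user_message, variables=None):
    prompt = {"id": BASE_PROMPT_ID}
    if variables:
        # Prompt variables must be re-sent alongside the ID on every turn
        prompt["variables"] = variables
    return {
        "conversation": conversation_id,
        "prompt": prompt,
        "input": user_message,
        "store": True,
    }

# Every turn then becomes:
# response = client.responses.create(**build_turn_kwargs(conversation_id, user_message))
```

Because the model and tools come from the prompt itself, the helper deliberately passes no `model=` argument.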


Helpful question :+1: Prompt IDs don’t persist with the conversation by default, so the drift you’re seeing makes sense. In practice, the safest approach is either to send the prompt ID on every turn or to embed the base rules as a developer/system message when creating the conversation, which keeps behavior consistent.
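For the second option, here is a sketch of baking the rules into the conversation at creation time (the rules text, greeting, and helper name are assumptions for illustration):

```python
# Hypothetical sketch: persist base rules by storing them as a developer
# message in the conversation itself, so later turns inherit them from context.
def build_conversation_items(base_rules, greeting):
    return [
        {"role": "developer", "content": base_rules},
        {"role": "assistant", "content": greeting},
    ]

# conversation = client.conversations.create(
#     items=build_conversation_items(
#         "You are a support assistant. Always answer in formal English.",
#         "Initial greeting or setup message",
#     )
# )
# Later client.responses.create(conversation=conversation.id, ...) calls then
# see the developer message without a prompt= argument on every turn.
```

Note the trade-off: rules stored this way are frozen at creation, so later edits to the platform prompt will not be picked up the way re-sending the prompt ID each turn would.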
