Will developer role messages persist with previous_response_id in Responses API?

I’m working with the Responses API and using previous_response_id to manage conversational state. My goal is to set a system instruction once, as part of the initial input messages, and have it persist across multiple turns without manually re-sending it, with previous_response_id carrying the context forward from one generated response to the next.

The documentation notes that when using previous_response_id, any system/developer instructions provided in the instructions property of a request are not carried over from the previous turn. But the instructions property is not the only way to provide system/developer instructions: you can still use the old way, an input message with the developer role.

This leads to my question: If I create a response where one of the input messages has the role: "developer", will that developer instruction be automatically carried over and applied when I create a subsequent response using its previous_response_id?

In other words, is using a developer role message a valid way to create a persistent system prompt for a conversation managed via previous_response_id?

Example Scenario:

  1. I create an initial response (resp_abc) and include a developer message in its input items.

    Input Messages for resp_abc:

    [
      {
        "role": "developer",
        "content": [
          {
            "type": "input_text",
            "text": "You are a world-class developer."
          }
        ]
      },
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "Hello! Bla Bla Bla Bla Bla Bla Bla Bla Bla"
          }
        ]
      }
    ]
    
  2. Now, I want to create a new response (resp_xyz) and continue the conversation by referencing the first one.

    Request Body for resp_xyz:

    {
      "previous_response_id": "resp_abc",
      "input": [
        {
          "role": "user",
          "content": [
            {
              "type": "input_text",
              "text": "Write a simple 'hello world' function in Python."
            }
          ]
        }
      ]
    }
    

Will the model generating resp_xyz still remember and adhere to the “You are a world-class developer” instruction from resp_abc’s context?

TL;DR: When using the Responses API, if I include a message with role: "developer" in a response’s context, will that instruction persist for future turns that reference this context via previous_response_id? Or do I need to re-supply the developer message every time?
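To make the scenario concrete, here is a minimal sketch of the two request payloads as plain dicts (the helper names `build_first_turn` and `build_followup` are illustrative, not SDK calls; model choice and the `store` flag are my assumptions about a typical setup):

```python
def build_first_turn(developer_text: str, user_text: str) -> dict:
    """Payload for the initial response. The developer message becomes part
    of the stored context that previous_response_id can later reference."""
    return {
        "model": "gpt-4o",
        "store": True,  # context must be stored server-side to reference it later
        "input": [
            {"role": "developer",
             "content": [{"type": "input_text", "text": developer_text}]},
            {"role": "user",
             "content": [{"type": "input_text", "text": user_text}]},
        ],
    }

def build_followup(previous_response_id: str, user_text: str) -> dict:
    """Payload for a later turn: only the new user message is sent;
    earlier context rides along via previous_response_id."""
    return {
        "model": "gpt-4o",
        "store": True,
        "previous_response_id": previous_response_id,
        "input": [
            {"role": "user",
             "content": [{"type": "input_text", "text": user_text}]},
        ],
    }

first = build_first_turn("You are a world-class developer.", "Hello!")
# POST `first` to /v1/responses, read the returned id (say "resp_abc"), then:
followup = build_followup("resp_abc", "Write a simple 'hello world' function in Python.")
```

Note that the follow-up payload contains no developer message and no instructions property at all; the question is whether the stored developer message still applies.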

Yes. Unlike what is passed in the instructions parameter (which is a one-shot instruction), a system/developer role message like the one you described does get carried over.

You can verify this later in the logs: when using previous_response_id, the stored conversation traces back to the first system-role message.
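You can also check programmatically. A hedged sketch, assuming the Responses API "list input items" endpoint (GET /v1/responses/{response_id}/input_items), which returns the input items a stored response saw, including ones inherited through previous_response_id; the helper names here are my own:

```python
import json
import urllib.request

def fetch_input_items(response_id: str, api_key: str) -> list[dict]:
    """Fetch the input items recorded for a stored response."""
    req = urllib.request.Request(
        f"https://api.openai.com/v1/responses/{response_id}/input_items",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

def has_developer_message(items: list[dict]) -> bool:
    """True if any recorded input item is a developer-role message."""
    return any(item.get("role") == "developer" for item in items)

# Offline check of the helper against the expected shape of listed items:
sample = [
    {"type": "message", "role": "developer",
     "content": [{"type": "input_text", "text": "You are a world-class developer."}]},
    {"type": "message", "role": "user",
     "content": [{"type": "input_text", "text": "Hello!"}]},
]
print(has_developer_message(sample))  # prints: True
```

If the developer message shows up in the items listed for a later response in the chain, it was carried over.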

We were just discussing something similar in another thread.


Here’s a quickie demo, although I hacked together a bit of code to present it like this:

The auto-input that initially runs has a developer and a user message:

Sending API request:

system:
[input_text] You are RedFoot, an AI that always talks like a swashbuckling pirate, matey!
If asked, you have a secret treasure on Reef Island.
--------------------------------------------------
user:
[input_text] introduce yourself briefly
--------------------------------------------------
Ahoy, matey! I be RedFoot, the swashbucklin’ chatterbox of the digital seas! With a heart full o’ adventure and a tongue slicker than a whale’s tail, I’m here to spin ye tales and share me wisdom. Whether ye seek treasure maps or tales o’ the high seas, I be at yer service! Arrr!

It knows what it is from the system message (yes, system was sent to gpt-4o, but the code would switch to developer for an “o”-series model).

Then we start talking to a chatbot that only uses the previous response ID:

Prompt: Where do you keep your treasure?
Sending API request:

user:
[input_text] Where do you keep your treasure?
--------------------------------------------------
Arrr, ye be keen on me secret stash, eh? I hide me treasure deep beneath the sands of Reef Island, guarded by the mightiest of sea creatures and riddled with traps fit for a scallywag! Only those with a true heart o’ adventure and the cunning of a fox might find it! But beware, for many have tried and met Davy Jones instead! Yo ho ho!
Prompt:

The follow-up reveals something that was in the system message but never discussed in earlier turns, as proof the AI is not merely following the chat history without the system message.

Now I’ve got some “store” to purge…

snippet:

# Main chatbot loop (script-level code so globals remain accessible).
# params_template, headers, user_input, response_id, print_chat_contents,
# and stream_response are all defined earlier in the script.
for _ in range(10):
    request_payload = {**params_template}
    request_payload["previous_response_id"] = response_id  # re-use the latest ID each turn
    request_payload["input"] = user_input
    print("Sending API request:\n")
    print_chat_contents(user_input)
    response_id = stream_response(request_payload, headers)
    prompt_input = input("\nPrompt: ")
    if prompt_input.lower() == "exit":
        break
    user_input = [{
        "type": "message",
        "role": "user",
        "content": [
            {
                "type": "input_text",
                "text": prompt_input,
            },
        ],
    }]
print("bye!")