Batch requests with saved prompts

Hi

I have developed prompts in the OpenAI platform to generate educational content based on inputted topics. These prompts are published, and I can call them successfully via the /v1/responses API with a body similar to:

{
  "prompt": {
    "id": "...",
    "version": "4"
  },
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "..."
        }
      ]
    }
  ],
  "text": {
    "format": {
        // ...
    }
  },
  "reasoning": {},
  "max_output_tokens": 2048,
  "store": true,
  "include": ["web_search_call.action.sources"]
}

We now need to generate content in bulk, so I am interested in using the Batch API to perform multiple requests against the same prompt.

However, I receive validation errors when submitting a JSONL batch file with lines similar to:

{"custom_id":"e90ce987-d6e1-4a01-9fb7-e783ec9e89e9","method":"POST","url":"/v1/responses","body":{"prompt":{"id":"{promptId}","version":"4","variables":{"variable1":"..."}},"input":[{"role":"user","content":[{"type":"input_text","text":"..."}]}],"text":{"format":{...}}},"reasoning":[],"max_output_tokens":2048,"store":true,"include":["web_search_call.action.sources"]}}

The error I receive is:

Line 1 Model parameter is required.

I am finding conflicting information on whether stored prompts can be referenced in this way using the batch API. Is this possible?

If so, are there any obvious issues with what I am doing?

Thank you in advance for your help.

Your one-line shape had a bare ... as text.format, and then an extra trailing curly brace. Reformatted, here is what you are actually sending now:

{
  "custom_id": "e90ce987-d6e1-4a01-9fb7-e783ec9e89e9",
  "method": "POST",
  "url": "/v1/responses",
  "body": {
    "prompt": {
      "id": "{promptId}",
      "version": "4",
      "variables": {
        "variable1": "..."
      }
    },
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "..."
          }
        ]
      }
    ],
    "text": {
      "format": {}
    }
  },
  "reasoning": [],
  "max_output_tokens": 2048,
  "store": true,
  "include": [
    "web_search_call.action.sources"
  ]
}

What I spot: "id": "{promptId}" - if that’s supposed to be a variable interpolated into a Python string, you’re missing the f-string prefix. Check your actual JSONL.
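If you are assembling the JSONL in Python (an assumption on my part; adapt it to whatever you actually use), build each line as a dict and let json.dumps serialize it instead of formatting strings by hand, so a literal {promptId} can never survive into the file. A minimal sketch:

import json
import uuid

prompt_id = "pmpt_..."  # placeholder: substitute your real prompt ID

# Build each request as a dict so every value is filled in before serialization.
line = {
    "custom_id": str(uuid.uuid4()),
    "method": "POST",
    "url": "/v1/responses",
    "body": {
        "prompt": {"id": prompt_id, "version": "4", "variables": {"variable1": "..."}},
        "input": [{"role": "user", "content": [{"type": "input_text", "text": "..."}]}],
        "text": {"format": {"type": "text"}},
        "reasoning": {},                 # an object, not an array
        "max_output_tokens": 2048,
        "store": True,
    },
}

with open("batch.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(line) + "\n")   # one compact JSON object per line

Note that reasoning, max_output_tokens and store sit inside "body" here; the stray brace in your example pushed them outside it.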

“reasoning” is an object, not an array, for example:
"reasoning": {"effort": reasoning_effort, "summary": reasoning_summary} - check your real JSONL.

If you got this out of the Playground and aren’t using tools, you can drop the “include” it gave you for no reason. The saved-prompt idea is flawed here anyway: there are other “include” values that require knowing what is inside a prompt ID, which can’t be retrieved, just like this one.

The batch endpoint SHOULD accept anything you can send to the API in a raw JSON request as “body”. There is no schema, beyond the Responses API reference itself, that we can consult to see why what you sent might be rejected.

Take a “body” object straight out of your JSONL and send it to the API as the JSON body of a raw HTTP call to see whether it is mis-shapen.
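For example, a quick check along these lines (a Python sketch using the requests library; it assumes OPENAI_API_KEY is set in the environment and the file is named batch.jsonl):

import json
import os

import requests

# Pull the "body" out of the first JSONL line and post it straight to /v1/responses.
with open("batch.jsonl", "r", encoding="utf-8") as f:
    body = json.loads(f.readline())["body"]

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=body,
)
print(resp.status_code)
print(resp.json())

If that call succeeds, the shape of “body” itself is fine and the problem lies with the batch validation.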

The posting below reports success. The “model” field would have to be optional (as it is), so this seems like a regression on the batch endpoint.

We can’t blame it on “model” if the validation never retrieves a model from the working prompt ID, and a model is required to be saved into a prompt when you create one on the platform site. So it must be the batch->/v1/responses validation as a whole demanding a model parameter that is otherwise optional.

Conclusion: bug :beetle:


Thank you for your reply, @_j.

I should have been clearer in my original post: some of the detail was omitted for brevity and to avoid sharing the prompt identifier.

I’ve created a simpler prompt to minimise chances of a malformed request being responsible for the failure. Here’s what I’ve found:

The following body posted directly to /v1/responses works as expected:

{"model":"gpt-4o-mini","input":[{"role":"system","content":[{"type":"input_text","text":"Whatever prompt I provide, you will reply \"pong\"."}]}],"text":{"format":{"type":"text"}},"reasoning":{},"tools":[],"temperature":1,"max_output_tokens":2048,"top_p":1,"store":true}

Wrapping this into a batch request in JSONL also works: validation passed and the batch is now processing.
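For reference, submitting the file looks roughly like this (a sketch using the official openai Python SDK; the file name is a placeholder):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL file, then create a batch that targets /v1/responses.
batch_file = client.files.create(file=open("batch.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/responses",
    completion_window="24h",
)
print(batch.id, batch.status)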

I then published the prompt. Posting the following directly to /v1/responses works as expected:

{"prompt":{"id":"pmpt_691def08196881909e1cbbf9e591f2080ee9238bf8983775","version":"1"},"input":[{"role":"user","content":[{"type":"input_text","text":"Ping"}]}],"text":{"format":{"type":"text"}},"reasoning":{},"max_output_tokens":2048,"store":true}

Wrapping this into a batch request as follows does not work:

{"custom_id":"fb47b2ad-f615-4c19-a312-495c8011c3d2","method":"POST","url":"/v1/responses","body":{"model":"gpt-4o-mini","input":[{"role":"system","content":[{"type":"input_text","text":"Whatever prompt I provide, you will reply \"pong\"."}]}],"text":{"format":{"type":"text"}},"reasoning":{},"tools":[],"temperature":1,"max_output_tokens":2048,"top_p":1,"store":true}}

As before, the validation error message returned is:

Line 1 Model parameter is required.

I think that is enough to support your theory that there is a regression on this endpoint around validation of the optional model property.

I then tested the same batch request with the model property added:

{"custom_id":"fb47b2ad-f615-4c19-a312-495c8011c3d2","method":"POST","url":"/v1/responses","body":{"model":"gpt-4o-mini","prompt":{"id":"pmpt_691def08196881909e1cbbf9e591f2080ee9238bf8983775","version":"1"},"input":[{"role":"user","content":[{"type":"input_text","text":"Ping"}]}],"text":{"format":{"type":"text"}},"reasoning":{},"max_output_tokens":2048,"store":true}}

It works!

So, in summary, it seems as though making requests to stored prompts in batches does work, but requires the model property to be sent.

I experimented further and found that if you do submit a model as part of a request directly to /v1/responses, it takes precedence over the model stored in the prompt. Omitting it falls back to the model saved against the prompt.
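In practice, then, each batch line just needs to carry the model alongside the prompt reference. A sketch of one way to build the lines (the helper name is arbitrary; the model only has to be present, and it overrides the prompt’s saved model if it differs):

import json

def batch_line(custom_id: str, user_text: str) -> str:
    """Build one JSONL line that references the saved prompt and includes
    the (normally optional) model so that batch validation accepts it."""
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/responses",
        "body": {
            "model": "gpt-4o-mini",  # required by batch validation; overrides the prompt's model if different
            "prompt": {"id": "pmpt_691def08196881909e1cbbf9e591f2080ee9238bf8983775", "version": "1"},
            "input": [{"role": "user", "content": [{"type": "input_text", "text": user_text}]}],
            "text": {"format": {"type": "text"}},
            "reasoning": {},
            "max_output_tokens": 2048,
            "store": True,
        },
    })

print(batch_line("example-1", "Ping"))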

Thank you for your help and input - it has given me a workable solution to move forward with.
