Batch request saved prompts

Your one-line shape had a bare ... as text.format, and an extra trailing curly brace. Reformatted, this is what you are sending now:

{
  "custom_id": "e90ce987-d6e1-4a01-9fb7-e783ec9e89e9",
  "method": "POST",
  "url": "/v1/responses",
  "body": {
    "prompt": {
      "id": "{promptId}",
      "version": "4",
      "variables": {
        "variable1": "..."
      }
    },
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "..."
          }
        ]
      }
    ],
    "text": {
      "format": {}
    },
    "reasoning": [],
    "max_output_tokens": 2048,
    "store": true,
    "include": [
      "web_search_call.action.sources"
    ]
  }
}

What I spot: "id": "{promptId}". If that is supposed to be a Python variable interpolated into the string, you are missing the f-string prefix, so the literal text "{promptId}" is being sent instead of your prompt ID. Check your actual JSONL.
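A minimal sketch of the difference, assuming you assemble each JSONL line in Python (prompt_id and the surrounding names are hypothetical):

import json

prompt_id = "pmpt_abc123"  # hypothetical placeholder for your saved prompt ID

# Without the f prefix, "{prompt_id}" is literal text, not a substitution:
broken = '{"prompt": {"id": "{prompt_id}"}}'

# With the f prefix it substitutes (JSON braces must be doubled inside f-strings):
fixed = f'{{"prompt": {{"id": "{prompt_id}"}}}}'

# Safer still: build a dict and let json.dumps handle quoting and braces
body = {"prompt": {"id": prompt_id, "version": "4"}}
print(json.dumps(body))

Building a dict and serializing it also avoids the brace and quote mistakes a hand-written one-liner invites.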

Also, “reasoning” is an object, not an array; for example:

"reasoning": {"effort": reasoning_effort, "summary": reasoning_summary}

Again, look at your real JSONL.

If you got this out of the playground and aren’t using tools, you can drop the useless “include” it gave you. The saved-prompts idea is flawed here anyway: some “include” values only make sense if you know what is inside a prompt ID, and that can’t be retrieved, just like this one.
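Putting those fixes together, a corrected line would look something like this (a sketch: the prompt ID and the reasoning values are placeholders for your own, the empty text.format and the “include” are dropped, and remember each record must sit on a single line in the actual JSONL):

{
  "custom_id": "e90ce987-d6e1-4a01-9fb7-e783ec9e89e9",
  "method": "POST",
  "url": "/v1/responses",
  "body": {
    "prompt": {
      "id": "pmpt_abc123",
      "version": "4",
      "variables": {
        "variable1": "..."
      }
    },
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "..."
          }
        ]
      }
    ],
    "reasoning": {
      "effort": "low",
      "summary": "auto"
    },
    "max_output_tokens": 2048,
    "store": true
  }
}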

The batch endpoint SHOULD accept anything you can send to the API in a raw JSON request as “body”. There is no separate schema reference beyond the Responses API Reference that we can consult to see why what you sent could be wrong.

Take a “body” object right out of your JSONL and send it to the API as the JSON body of a raw HTTP call, to see whether it is mis-shapen. For example:
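A quick test with Python and requests (assumes the requests package, OPENAI_API_KEY set in your environment, and a hypothetical file name batch_input.jsonl):

import json
import os

import requests

# Read the first record of your batch file and extract its "body"
with open("batch_input.jsonl") as f:
    record = json.loads(f.readline())

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=record["body"],  # send the body exactly as the batch would
)
print(resp.status_code)
print(json.dumps(resp.json(), indent=2))

If this direct call succeeds while the batch rejects the same body, the problem is in the batch validation, not in your shape.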

Such a direct posting reports success. The “model” field would have to be optional there (as it is), and thus the batch rejection looks like a regression on the batch endpoint.

We can’t blame it on “model” when the batch validation never retrieves the model from the working prompt ID; a model is required and saved into a prompt when you create one on the platform site, so the prompt ID already supplies it. The batch->/v1/responses validation as a whole must therefore be requiring a “model” field that is actually optional.
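If that diagnosis is right, a possible workaround, which I have not tested and which merely duplicates what the saved prompt already specifies, is to add an explicit “model” to each batch body (model name hypothetical; it should match the one saved in your prompt):

"body": {
  "model": "gpt-4.1",
  "prompt": {
    "id": "pmpt_abc123",
    "version": "4"
  }
}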

Conclusion: bug :beetle:
