Hi
I have developed prompts on the OpenAI platform to generate educational content based on inputted topics. These prompts are published, and I can call them successfully via the /v1/responses API with a body similar to:
{
  "prompt": {
    "id": "...",
    "version": "4"
  },
  "input": [
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "..."
        }
      ]
    }
  ],
  "text": {
    "format": {
      // ...
    }
  },
  "reasoning": {},
  "max_output_tokens": 2048,
  "store": true,
  "include": ["web_search_call.action.sources"]
}
We now need to generate content in bulk, so I am interested in using the Batch API to make multiple requests against the same stored prompt.
However, I receive validation errors when submitting a JSONL batch file whose lines look similar to:
{"custom_id":"e90ce987-d6e1-4a01-9fb7-e783ec9e89e9","method":"POST","url":"/v1/responses","body":{"prompt":{"id":"{promptId}","version":"4","variables":{"variable1":"..."}},"input":[{"role":"user","content":[{"type":"input_text","text":"..."}]}],"text":{"format":{...}},"reasoning":{},"max_output_tokens":2048,"store":true,"include":["web_search_call.action.sources"]}}
The error I receive is:
Line 1 Model parameter is required.
I am finding conflicting information on whether stored prompts can be referenced this way through the Batch API. Is this possible?
If so, are there any obvious issues with how I am constructing the requests?
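In case it helps with diagnosis, this is roughly how I generate the JSONL file, as a minimal Python sketch. The `build_request` helper, the `"{promptId}"` placeholder, the sample topics, and the `variable1` value are placeholders standing in for my real values, and I have omitted the `text.format` block for brevity:

```python
import json
import uuid

def build_request(prompt_id: str, topic: str) -> dict:
    """Build one /v1/responses batch request referencing a stored prompt.

    Mirrors the single-request body that works against /v1/responses,
    wrapped in the batch envelope (custom_id, method, url, body).
    """
    return {
        "custom_id": str(uuid.uuid4()),
        "method": "POST",
        "url": "/v1/responses",
        "body": {
            "prompt": {
                "id": prompt_id,            # placeholder for the stored prompt ID
                "version": "4",
                "variables": {"variable1": topic},
            },
            "input": [
                {
                    "role": "user",
                    "content": [{"type": "input_text", "text": topic}],
                }
            ],
            "reasoning": {},
            "max_output_tokens": 2048,
            "store": True,
            "include": ["web_search_call.action.sources"],
        },
    }

# One JSON object per line, as the Batch API expects.
with open("requests.jsonl", "w") as f:
    for topic in ["photosynthesis", "the water cycle"]:
        f.write(json.dumps(build_request("{promptId}", topic)) + "\n")
```

The resulting file is then uploaded with `purpose="batch"` and submitted via the batches endpoint, which is where the "Model parameter is required" error appears.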
Thank you in advance for your help.