Thanks for your input. I started from this, because I found this advice in the chat here: I created a new project and new API keys, and double-checked that the models I need are within the allowed limits, but it still doesn't work. The issue was partially resolved by using gpt-5-mini-2025-08-07 instead of gpt-5-mini in batch requests via the Responses endpoint. But this still isn't a normal workflow.
There is a persistent issue OpenAI has had with project provisioning that affects the batch endpoint.
If you see “-batch” appended to the model name in the error, the likely self-service solution is to create a brand-new project to run your batches from, along with a new API key generated for that project.
A reminder that server-side artifacts are unexpectedly and incompletely “scoped” to a project ID, so making further Responses calls against existing asset IDs can be problematic if you use any of the offered services that collect and retain your application data server-side.
Jumping in here quickly to see if I can offer any help.
For anyone blocked, can you try using the snapshot model name first (e.g., gpt-5-mini-2025-08-07) when calling the batch/Responses endpoints? If that doesn't work, create a new project and a fresh API key and retry. If you still get an error, please write in to support@openai.com with the failing request_id, the project ID, and exact timestamps so we can escalate with logs and the batch team. If you have any trouble reaching support, let us know here and I'll follow up.
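To make the snapshot-name workaround concrete, here is a minimal sketch of building a Batch API input file that targets the /v1/responses endpoint with the dated snapshot pinned instead of the gpt-5-mini alias. The custom_id values and prompts are placeholders, and the commented-out upload/create calls assume the current openai Python SDK and an API key from the newly created project:

```python
import json

# Pin the dated snapshot rather than the "gpt-5-mini" alias, since the alias
# is what appears to trigger the "-batch" provisioning error described above.
PINNED_MODEL = "gpt-5-mini-2025-08-07"

# Placeholder prompts; each line of the .jsonl file is one batched request.
prompts = ["Say hello.", "Say goodbye."]

requests = [
    {
        "custom_id": f"req-{i}",          # your own identifier for matching results
        "method": "POST",
        "url": "/v1/responses",           # route each request to the Responses endpoint
        "body": {"model": PINNED_MODEL, "input": prompt},
    }
    for i, prompt in enumerate(prompts, start=1)
]

# Write the batch input file, one JSON object per line.
with open("batch_input.jsonl", "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

# Then upload the file and start the batch (requires OPENAI_API_KEY for the
# NEW project, per the advice above) -- shown here as a hedged sketch:
#   from openai import OpenAI
#   client = OpenAI()
#   batch_file = client.files.create(file=open("batch_input.jsonl", "rb"),
#                                    purpose="batch")
#   client.batches.create(input_file_id=batch_file.id,
#                         endpoint="/v1/responses",
#                         completion_window="24h")
```

If requests in the file still fail with the pinned snapshot, the request_id values from the batch output file are what support will want alongside the project ID and timestamps.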