Not able to use the Batch API with GPT 5.2

I have created a new API key in my project and enabled GPT 5.2. The project has Chat Completions and Responses access for gpt-5.2, but Batch API requests fail with:

 Project does not have access to model `gpt-5.2-2025-12-11-batch`

Is there any setting I am missing? Please help me resolve this.
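For reference, a simplified sketch of what I am doing with the Python openai SDK; the file name, custom_id, and prompt below are placeholders:

import json
from openai import OpenAI

client = OpenAI()  # project API key taken from the OPENAI_API_KEY environment variable

# One request per line in the batch input file
request = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-5.2",
        "messages": [{"role": "user", "content": "Hello"}],
    },
}
with open("batch_input.jsonl", "w") as f:
    f.write(json.dumps(request) + "\n")

# Upload the file for batch processing, then create the batch job
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
# The job then fails with: Project does not have access to model `gpt-5.2-2025-12-11-batch`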

This error referencing a model name ending in -batch is not your fault. It is a recurring issue on OpenAI's side that pops up on a semi-regular basis.

A mitigation you can try: use a different or newly created project and its associated API key for both the file upload and the batch job submission.
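In code that just means building a second client bound to the other project's key and using it for both calls; a minimal sketch, with the environment variable name being whatever you choose:

import os
from openai import OpenAI

# Client bound to a different project's API key, used for both the file upload and the batch creation
alt_client = OpenAI(api_key=os.environ["OPENAI_API_KEY_OTHER_PROJECT"])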

Also, if it suits your use case, you can run the batch against either Responses or Chat Completions and see whether the same model-name issue rears its head regardless of the target endpoint.
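The request lines in your input file have to match whichever endpoint you pass to the batch create call; roughly, the two shapes look like this (model kept as gpt-5.2, prompt and custom_id values are placeholders):

{"custom_id": "req-cc-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-5.2", "messages": [{"role": "user", "content": "Hello"}]}}
{"custom_id": "req-resp-1", "method": "POST", "url": "/v1/responses", "body": {"model": "gpt-5.2", "input": "Hello"}}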

Good luck.

I tried this as well, but I am facing the same error. I also tried changing the endpoint from Chat Completions to the others, but no luck.

Was this a status that was immediately received as a response to job submission, or was it the final report of the batch job?

Can I reproduce it? Here is my attempt:

[1/5] JSONL file written: repro_batch_input.jsonl
      Contents:
      {"custom_id": "repro-request-1", "method": "POST", "url": "/v1/responses", "body": {"model": "gpt-5.2", "max_output_tokens": 64, "input": "Reply with exactly one sentence: what is the capital of France?"}}

[2/5] Uploading repro_batch_input.jsonl to Files API …

────────────────────────────────────────────────────────────────────────
  Files API – upload response
  HTTP 200  OK
────────────────────────────────────────────────────────────────────────
{
  "object": "file",
  "id": "file-PqZwFghKRKVKJt5XoDss8c",
  "purpose": "batch",
  "filename": "repro_batch_input.jsonl",
  "bytes": 207,
  "created_at": 1777384613,
  "expires_at": 1779976613,
  "status": "processed",
  "status_details": null
}

      Uploaded file ID: file-PqZwFghKRKVKJt5XoDss8c

[3/5] Creating batch job (endpoint=/v1/responses, model=gpt-5.2) …

────────────────────────────────────────────────────────────────────────
  Batches API – create response
  HTTP 200  OK
────────────────────────────────────────────────────────────────────────
{
  "id": "batch_69f0bca659cc8190bd3b6f4abc6d30e4",
  "object": "batch",
  "endpoint": "/v1/responses",
  "model": null,
  "errors": null,
  "input_file_id": "file-PqZwFghKRKVKJt5XoDss8c",
  "completion_window": "24h",
  "status": "validating",
  "output_file_id": null,
  "error_file_id": null,
  "created_at": 1777384614,
  "in_progress_at": null,
  "expires_at": 1777471014,
  "finalizing_at": null,
  "completed_at": null,
  "failed_at": null,
  "expired_at": null,
  "cancelling_at": null,
  "cancelled_at": null,
  "request_counts": {
    "total": 0,
    "completed": 0,
    "failed": 0
  },
  "usage": {
    "input_tokens": 0,
    "output_tokens": 0,
    "total_tokens": 0,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens_details": {
      "reasoning_tokens": 0
    }
  },
  "metadata": {
    "description": "Bug repro \u2013 gpt-5.2 batch model-name exposure"
  }
}

      Batch ID : batch_69f0bca659cc8190bd3b6f4abc6d30e4
      Status   : validating

[4/5] Polling status every 60s …

  [06:56:55] Poll #  1  status=validating    total=0  completed=0  failed=0
  [06:57:56] Poll #  2  status=validating    total=0  completed=0  failed=0

Success Result

{"id": "batch_req_69f0bd1bfd0c81908abc6f4c88ed8c40", "custom_id": "repro-request-1", "response": {"status_code": 200, "request_id": "51d2a899-6032-4205-a500-cf5738640d0e", "body": {"id": "resp_0672a70fd833dc4b0069f0bcf0668481a08623b7b2cc64efa7", "object": "response", "created_at": 1777384688, "status": "completed", "background": false, "billing": {"payer": "openai"}, "completed_at": 1777384689, "error": null, "frequency_penalty": 0.0, "incomplete_details": null, "instructions": null, "max_output_tokens": 64, "max_tool_calls": null, "model": "gpt-5.2-2025-12-11", "moderation": null, "output": [{"id": "msg_0672a70fd833dc4b0069f0bcf1189c81a08c6cf9f659940e39", "type": "message", "status": "completed", "content": [{"type": "output_text", "annotations": [], "logprobs": [], "text": "The capital of France is Paris."}], "role": "assistant"}], "parallel_tool_calls": true, "presence_penalty": 0.0, "previous_response_id": null, "prompt_cache_key": null, "prompt_cache_retention": "in_memory", "reasoning": {"effort": "none", "summary": null}, "safety_identifier": null, "service_tier": "default", "store": true, "temperature": 1.0, "text": {"format": {"type": "text"}, "verbosity": "medium"}, "tool_choice": "auto", "tools": [], "top_logprobs": 0, "top_p": 0.98, "truncation": "disabled", "usage": {"input_tokens": 19, "input_tokens_details": {"cached_tokens": 0}, "output_tokens": 11, "output_tokens_details": {"reasoning_tokens": 0}, "total_tokens": 30}, "user": null, "metadata": {}}}, "error": null}

It seems the issue is not platform-wide. You can keep trying different "project" approaches, such as the default org project with no special permissions or limits, or even a legacy user key if your org still has those from having used them in the past.

You will need to contact support via help.openai.com and ask whether there is an org provisioning issue causing the -batch suffix to be exposed in an error result. Include the same kind of actionable information I show above, such as the batch job ID and the error status of the API call or of the job. In your message, leave the bot and front-line support nothing further to ask about, so that it gets escalated to staff.

Here is a similar case, among many, demonstrating a long-standing fault in the way the batch endpoint handles model names.

Got it, thanks for the pointer. I have started a thread with our DevOps team and OpenAI; hope it gets resolved.