/v1/completions endpoint not supported by gpt-4o, gpt-4o-mini

Hello there,
The Batch API documentation says /v1/completions is supported (“… and ‘/v1/completions’ …”) for these models (“… gpt-4o, gpt-4o-mini, …”).
Yet when I create a batch, the API rejects the request with the following error:

The provided model ‘gpt-4o-mini’ with endpoint ‘/v1/completions’ is not supported by the Batch API.

(Exact same error for gpt-4o model)
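For context, here is a sketch of the kind of batch input line I'm generating (the custom_id, prompt, and file name are placeholders; the model and endpoint are the ones from the error above):

```python
# Sketch of a batch input line targeting the legacy completions endpoint.
# custom_id, prompt, and file name are placeholders.
import json

line = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/completions",
    "body": {
        "model": "gpt-4o-mini",
        "prompt": "Say hello.",
        "max_tokens": 16,
    },
}

with open("batch_input.jsonl", "w") as f:
    f.write(json.dumps(line) + "\n")
```

Creating a batch from that file with endpoint "/v1/completions" is what triggers the rejection.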

I am currently Tier 1, which means I should be eligible for the Batch API.

I am not entirely sure whether this is a user error, whether I'm simply not allowed to make these requests, or whether it's a bug on the API's side, so any help is appreciated.

Completions is a legacy endpoint used only with certain older models that do plain text completion, such as gpt-3.5-turbo-instruct.

What you need is the chat completions endpoint: in the batch input file, use `/v1/chat/completions`, and for direct RESTful calls, `https://api.openai.com/v1/chat/completions`.

You then also have to construct JSONL entries whose `body` is a real request that could have been sent to that endpoint as the HTTP body.
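For example, a minimal sketch assuming the official `openai` Python SDK (v1.x) and an `OPENAI_API_KEY` in the environment; the custom_id, prompt, and file name are illustrative:

```python
# Build a JSONL entry whose body is exactly what you would POST to
# /v1/chat/completions, then upload it and create the batch.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

line = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 16,
    },
}

with open("batch_input.jsonl", "w") as f:
    f.write(json.dumps(line) + "\n")

# Upload the file and create the batch against the chat completions endpoint.
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```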

Alright, thanks. It worked fine with the chat endpoint, but I wanted to use the legacy completions endpoint instead. I guess the Batch API docs could use some work, then.