How do I add `reasoning_effort="high"` in the Batch API format?
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-3.5-turbo-0125", "messages": [{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Hello world!"}],"max_tokens": 1000}}
{"custom_id": "request-2", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-3.5-turbo-0125", "messages": [{"role": "system", "content": "You are an unhelpful assistant."},{"role": "user", "content": "Hello world!"}],"max_tokens": 1000}}
Also, I recently got the following error for o3-mini:
{"id": "batch_req_67b4701e8de48190ad2d4c373cd5720f", "custom_id": "100024", "response": {"status_code": 400, "request_id": "406b05a6bb0add15edb924fdc385d306", "body": {"error": {"message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", "type": "invalid_request_error", "param": "max_tokens", "code": "unsupported_parameter"}}}, "error": null}
Is there an updated format for the o3-mini model in the Batch API?
The error is related to the max_tokens parameter; the correct parameter is max_completion_tokens.

As for reasoning_effort="high", I'm not sure. Do check the documentation! Maybe it will work when you use the correct max_completion_tokens parameter?
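For what it's worth, the parameter rename alone would look like this in each .jsonl line (the custom_id, model, and prompt here are just placeholders):

```json
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "o3-mini", "messages": [{"role": "user", "content": "Hello world!"}], "max_completion_tokens": 1000}}
```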
Submitted a batch request; will update here if it works.
It worked: adding reasoning_effort="high" after max_completion_tokens did the trick.
Here is the output:
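For anyone else hitting this, a request line along these lines worked for me; the custom_id and prompt below are placeholders, not my actual values:

```json
{"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "o3-mini", "messages": [{"role": "user", "content": "Hello world!"}], "max_completion_tokens": 1000, "reasoning_effort": "high"}}
```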
{"id": "batch_req_67b54e5feec0819089991fe73dcd30c2", "custom_id": "100024", "response": {"status_code": 200, "request_id": "51f4a3ea60cdad08f9acf1c0f6d31673", "body": {"id": "chatcmpl-B2THgdBFzBzYrHyX1CkWWF6YJ26VE", "object": "chat.completion", "created": 1739928924, "model": "o3-mini-2025-01-31", "choices": [{"index": 0, "message": {"role": "assistant", "content": "", "refusal": null}, "finish_reason": "stop"}], "usage": {"prompt_tokens": 36338, "completion_tokens": 7777, "total_tokens": 44115, "prompt_tokens_details": {"cached_tokens": 0, "audio_tokens": 0}, "completion_tokens_details": {"reasoning_tokens": 5312, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0}}, "service_tier": "default", "system_fingerprint": "fp_ef58bd3122"}}, "error": null}