Hello,
When I set the reasoning_effort parameter in requests to gpt-5-search-api via the Chat Completions endpoint:
oa = openai.OpenAI()
oa.chat.completions.create(
    model="gpt-5-search-api",
    reasoning_effort="high",
    messages=[
        {"role": "user", "content": "Hi!"}
    ]
)
I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\codeofdusk\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_utils\_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\codeofdusk\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\resources\chat\completions\completions.py", line 1156, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\codeofdusk\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\codeofdusk\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Unrecognized request argument supplied: reasoning_effort', 'type': 'invalid_request_error', 'param': None, 'code': None}}
This seems like a bug: the equivalent request works fine through the Responses API, where the effort is passed as reasoning={"effort": "high"}. Is there an alternative way to set this for the search-capable model via Chat Completions?
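For reference, here is a minimal sketch of the working Responses-side request I mean. The environment-variable guard is just illustrative so the snippet can be read without credentials; the model name is the same one from the failing call above.

```python
import os

# Same request expressed against the Responses API, where reasoning
# effort is a nested object rather than a flat reasoning_effort argument.
kwargs = {
    "model": "gpt-5-search-api",
    "reasoning": {"effort": "high"},
    "input": "Hi!",
}

# Only issue the request when credentials are actually available.
if os.environ.get("OPENAI_API_KEY"):
    import openai

    oa = openai.OpenAI()
    resp = oa.responses.create(**kwargs)
    print(resp.output_text)
```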
Thanks in advance!