Does the API treat optional parameters that are not sent as having their default values?


I would like to know the following.
When sending a request to OpenAI’s API, if optional parameters are left unset, will the GPT model respond as if those parameters were set to their default values?

For example, suppose the model is gpt-3.5-turbo-16k. If I send a request without setting the temperature and presence_penalty parameters, as in the code below, does gpt-3.5-turbo-16k complete using the default values for temperature and presence_penalty?

response = openai.ChatCompletion.create(
        model = "gpt-3.5-turbo-16k",
        messages = [
            {"role": "user", "content": "Hello!"}
        ]
)

Thank you.


That is how optional parameters work: if they are not set, the API assigns them a default value.


Python library defaults?

More specifically, when using the openai Python library, one might wonder whether the function fills in the other optional parameters with default values before sending the request.

We examine a very small API request sent by Python:

— POST —
b'{"messages": [{"role": "user", "content": "Hi"}], "model": "gpt-3.5-turbo"}'
— end POST —
api response time: 0.8338 seconds
response: Hello! How can I assist you today?

The message byte string (captured right before it is transmitted) shows that nothing unspecified is added (the API key is sent out-of-band).
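This can be reproduced without touching the network. A minimal sketch (plain json here stands in for the library’s own serialization): a dict containing only the parameters you actually set serializes to exactly this kind of payload, with no defaults injected client-side.

```python
import json

# Serialize a request body containing only the parameters we explicitly set.
# Unset optional parameters (temperature, presence_penalty, ...) are simply
# absent from the dict, so they never appear in the transmitted JSON.
payload = {
    "messages": [{"role": "user", "content": "Hi"}],
    "model": "gpt-3.5-turbo",
}
body = json.dumps(payload)
print(body)
# {"messages": [{"role": "user", "content": "Hi"}], "model": "gpt-3.5-turbo"}
```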

Model API defaults

One might wonder about the default values used by models. The answer: they are documented here.

temperature and top_p are not precisely discoverable through API use, but both default to 1.0 per the documentation.

The default behavior when max_tokens is not specified is quite different: all remaining context space after the supplied input can be used for the completion, but no space is specifically reserved for it. You could fill up the context with input and have little room left for generating an answer.

Which is discoverable by API:

api response time: 133.7922 seconds
"finish_reason": "length"
"usage": {
"prompt_tokens": 19,
"completion_tokens": 4078,
"total_tokens": 4097

This differs from the max_tokens default of the completion models’ endpoint, which is 16 tokens of output, making setting max_tokens (and sometimes calculating an appropriate value) almost a requirement there.
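The numbers in the usage block above check out, and the same arithmetic can drive a max_tokens choice. A sketch, assuming the 4097-token context window implied by the reported usage (pick_max_tokens is a hypothetical helper, not a library function):

```python
# Figures from the usage block above: with max_tokens unset, the
# completion consumed everything left after the prompt.
context_window = 4097
prompt_tokens = 19
assert context_window - prompt_tokens == 4078  # matches completion_tokens

# Hypothetical helper: reserve a bounded amount of output space instead
# of letting the completion run until a "length" stop fills the context.
def pick_max_tokens(prompt_tokens, context_window=4097, desired_output=500):
    remaining = context_window - prompt_tokens
    return max(0, min(desired_output, remaining))

print(pick_max_tokens(19))    # ample room: 500
print(pick_max_tokens(3900))  # squeezed: 197
```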


Thank you for your reply.
It made me realize again how the default values of optional parameters work.
Thank you for the explanation. I learned a lot from it.

Thank you for your reply.
I had misunderstood: I thought the Python library set unset parameters to their default values before sending the request to the API.
Also, I didn’t know about the max_tokens default of the completion models’ endpoint.
Thank you for the explanation. I learned a lot from it.