For the gpt-3.5-turbo, text-davinci-003, and text-davinci-002 models, what are the default values for each parameter?
In Python, does it make any difference whether the request body is built as a JSON string or as a dict?
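On the dict-vs-JSON question: HTTP clients such as `requests` (via its `json=` argument) and the `openai` Python client serialize a dict to JSON for you, so the two forms carry identical data on the wire. A minimal sketch showing the round trip:

```python
import json

# A request body built as a plain Python dict.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.2,
}

# Serializing the dict by hand yields the same JSON string an HTTP client
# would send, e.g. requests.post(url, json=payload) does exactly this.
body = json.dumps(payload)

# Round-tripping shows the two representations are equivalent.
assert json.loads(body) == payload
```

So the only practical difference is who calls `json.dumps`: you, or the client library.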
Welcome @tiago.feitoza
You’ll find all of them in the API Reference
I want to complete the tables below, but I can’t find all the data in the documentation.
Python v1/chat/completions

| Attribute | Type | Status | Default | Limit |
|---|---|---|---|---|
| frequency_penalty | number | Optional | 0 | -2.0 ~ 2.0 |
| logit_bias | map | Optional | null | {token_id: -100 ~ 100} |
| max_tokens | integer | Optional | inf | ? |
| messages | array | Required | ? | ? |
| model | string | Required | ? | ? |
| n | integer | Optional | 1 | ? |
| presence_penalty | number | Optional | 0 | -2.0 ~ 2.0 |
| stop | string or array | Optional | null | up to 4 sequences |
| stream | boolean | Optional | false | true or false |
| temperature | number | Optional | 1 | 0 ~ 2 |
| top_p | number | Optional | 1 | 0 ~ 1 |
| user | string | Optional | ? | ? |
Python v1/completions

| Attribute | Type | Status | Default | Limit |
|---|---|---|---|---|
| best_of | integer | Optional | 1 | 1 ~ ? |
| echo | boolean | Optional | false | true or false |
| frequency_penalty | number | Optional | 0 | -2.0 ~ 2.0 |
| logit_bias | map | Optional | null | {token_id: -100 ~ 100} |
| logprobs | integer | Optional | null | 0 ~ 5 |
| max_tokens | integer | Optional | 16 | 0 ~ ? |
| model | string | Required | ? | * |
| n | integer | Optional | 1 | 1 ~ ? |
| presence_penalty | number | Optional | 0 | -2.0 ~ 2.0 |
| prompt | string or array | Optional | ? | ? |
| stop | string or array | Optional | null | up to 4 sequences |
| stream | boolean | Optional | false | true or false |
| suffix | string | Optional | null | ? |
| temperature | number | Optional | 1 | 0 ~ 2 |
| top_p | number | Optional | 1 | 0 ~ 1 |
| user | string | Optional | ? | ? |
I would like this information so I can validate a request before sending it, and avoid overwriting a default parameter with a value of the wrong type or outside the allowed range.
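One way to do that pre-flight check is to hand-build a spec from the tables above and validate overrides against it locally. A sketch, not part of the openai library; `CHAT_SPEC` below covers only a few parameters and the ranges it encodes are the ones from the tables:

```python
# (expected type, minimum, maximum) per parameter; None means "no bound known".
CHAT_SPEC = {
    "temperature": (float, 0, 2),
    "top_p": (float, 0, 1),
    "frequency_penalty": (float, -2.0, 2.0),
    "presence_penalty": (float, -2.0, 2.0),
    "n": (int, 1, None),
}

def validate(params):
    """Return a list of problems; an empty list means the overrides look valid."""
    errors = []
    for name, value in params.items():
        if name not in CHAT_SPEC:
            errors.append(f"unknown parameter: {name}")
            continue
        expected, lo, hi = CHAT_SPEC[name]
        # bool is a subclass of int in Python, so reject it explicitly;
        # otherwise accept ints where floats are expected (e.g. top_p=1).
        if isinstance(value, bool) or not isinstance(value, (expected, int)):
            errors.append(f"{name}: expected {expected.__name__}, "
                          f"got {type(value).__name__}")
            continue
        if lo is not None and value < lo:
            errors.append(f"{name}: {value} below minimum {lo}")
        if hi is not None and value > hi:
            errors.append(f"{name}: {value} above maximum {hi}")
    return errors
```

For example, `validate({"temperature": 0.2, "top_p": 1})` returns an empty list, while `validate({"temperature": 3.0})` reports the out-of-range value before any request is sent.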
I have built “default” requests as follows:
"api_chat_completion": {
"frequency_penalty": 0,
"logit_bias": {},
"max_tokens": 256,
"messages": [
{
"role": "user",
"content": "select_prompt + input_field"
}
],
"model": null,
"n": 1,
"presence_penalty": 0,
"stop": "",
"stream": false,
"temperature": 0.2,
"top_p": 1,
"user": ""
},
"api_completion": {
"best_of": 1,
"echo": false,
"frequency_penalty": 0,
"logit_bias": {},
"logprobs": null,
"max_tokens": 256,
"model": null,
"n": 1,
"presence_penalty": 0,
"prompt": "",
"stop": "",
"stream": false,
"suffix": "",
"temperature": 0.2,
"top_p": 1,
"user": ""
},
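With templates like these, per-call overrides can be merged into a copy so the stored defaults are never mutated. A sketch; the `defaults` dict here is a trimmed, hypothetical version of the "api_chat_completion" template above:

```python
import copy

# Trimmed version of the default request template.
defaults = {
    "model": None,
    "temperature": 0.2,
    "top_p": 1,
    "n": 1,
    "stream": False,
    "messages": [],
}

def build_request(**overrides):
    """Merge caller overrides into a deep copy of the default template."""
    # deepcopy so nested values like the messages list aren't shared
    # between requests.
    request = copy.deepcopy(defaults)
    request.update(overrides)
    return request

req = build_request(model="gpt-3.5-turbo",
                    messages=[{"role": "user", "content": "Hi"}])

# The stored template is untouched by the per-call overrides.
assert defaults["model"] is None and defaults["messages"] == []
```

Keeping the merge in one helper also gives a natural place to plug in the type/range validation discussed earlier, before the merged dict is sent.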