Davinci still seems like the gold standard, compared to turbo

I see you are using max_tokens: 4000. Try reducing it so that
system + previous_conversation + response < 4000

I will try that too. However, I also tried setting max_tokens: null, that is, using the default value (which I am not sure what it is), and the result was the same Bad Request.

If you set max_tokens: null, this overrides the max_tokens value the client sends to the API, but the OpenAI hard limit of 4096 tokens cannot be bypassed or overridden by any client-side method.
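The budgeting above can be sketched as a small helper (a minimal sketch assuming the 4096-token context limit discussed here; `response_budget` is a hypothetical name, not part of the OpenAI client):

```python
# Sketch: budget max_tokens so that prompt (system + previous conversation)
# plus the response stays under the model's hard context limit.
# The token counts in the example are illustrative, not real API measurements.

CONTEXT_LIMIT = 4096  # hard limit that cannot be bypassed client-side

def response_budget(prompt_tokens, desired_max=None):
    """Return a max_tokens value that keeps prompt + response under the limit."""
    remaining = CONTEXT_LIMIT - prompt_tokens
    if remaining <= 0:
        raise ValueError("Prompt alone exceeds the context limit; trim the conversation.")
    if desired_max is None:
        return remaining  # let the response use whatever room is left
    return min(desired_max, remaining)

# Example: a 1,500-token prompt with max_tokens requested at 4000
print(response_budget(1500, 4000))  # -> 2596
```

Passing the clamped value as max_tokens in the request avoids the Bad Request that occurs when the requested total exceeds 4096.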

HTH

:slight_smile:
