Maximum tokens allowed for the ChatGPT model gpt-3.5-turbo

We are trying to use the gpt-3.5-turbo model for chat completion REST API calls in our application. We are not sure about the maximum token limit (request + response) for this model when using the Chat Completions API. Could anyone please let me know the maximum token limit?

Below is the Chat Completions API link:

https://platform.openai.com/docs/api-reference/chat/create
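
For context, here is a minimal sketch of the kind of call we are making (assuming the `openai` Python package v1.x and an `OPENAI_API_KEY` environment variable; the `max_tokens` value is just a placeholder). The question is how large the messages plus `max_tokens` are allowed to be in total:

```python
# Minimal sketch: the prompt tokens plus the completion (capped by max_tokens)
# must fit within the model's context window. Assumes openai>=1.0 and
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Chat Completions API in one sentence."},
    ],
    max_tokens=256,  # placeholder cap on the response length
)

# usage reports how many tokens the request and response actually consumed
print(response.usage.prompt_tokens,
      response.usage.completion_tokens,
      response.usage.total_tokens)
```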

The maximum context length for each model (which covers both the request and the response tokens) is listed on the models page. Hope this helps.

Reference: https://platform.openai.com/docs/models/gpt-3-5-turbo


It’s worth highlighting that the default alias gpt-3.5-turbo is about to be upgraded to gpt-3.5-turbo-0125 within a few days (February 16th) and will therefore have a 16k context window, so you may not need to update your model name.
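
If it helps, here is a rough sketch (assuming the `tiktoken` package; the per-message overhead is only an approximation and the limits below are the currently documented ones, which could change) of how you might pre-check that a prompt leaves room for the completion inside that 16k window:

```python
# Rough pre-check of prompt size against the gpt-3.5-turbo-0125 limits.
# Assumes the tiktoken package; overhead constants are approximate.
import tiktoken

CONTEXT_WINDOW = 16_385   # total context window (prompt + completion)
MAX_OUTPUT = 4_096        # documented cap on completion tokens for this model

def count_message_tokens(messages, model="gpt-3.5-turbo-0125"):
    enc = tiktoken.encoding_for_model(model)
    tokens = 0
    for message in messages:
        tokens += 3  # approximate per-message formatting overhead
        for value in message.values():
            tokens += len(enc.encode(value))
    return tokens + 3  # approximate priming for the assistant's reply

messages = [{"role": "user", "content": "Hello, how many tokens is this?"}]
prompt_tokens = count_message_tokens(messages)
room_for_output = min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens)
print(prompt_tokens, room_for_output)
```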


Do you know if the 16k-token context window mentioned for gpt-3.5-turbo-0125 includes the 4k tokens of output?
