There is an error in the OpenAI model documentation page

I found what seems to be an error in the OpenAI model documentation page. In the GPT-3.5 section of the models page (OpenAI Platform), the entry for gpt-3.5-turbo-16k says it points to gpt-3.5-turbo-0613, which does not seem correct. The correct text would be "… Currently points to gpt-3.5-**turbo-16k**-0613"; otherwise there would be no difference between the models gpt-3.5-turbo and gpt-3.5-turbo-16k.


I think the 16k is the new base model.

They announced this at DevDay.

Okay @maister, and how is that related to the June model?

It will be wrong for another 25 days, after which we might assume that either alias, with or without 16k, gets you to the single new model.

However, even gpt-3.5-turbo-1106, like its GPT-4-turbo cohort, is now noted as "only returns 4k max tokens output". So, application-breaking: if you were able to get the current `gpt-3.5-turbo-16k` to do productive generation past 4k and had set `max_tokens` to match:

gpt-3.5-turbo (expected)

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens. However, you requested 6024 tokens (24 in the messages, 6000 in the completion). Please reduce the length of the messages or completion."

gpt-3.5-turbo-1106 (was illogical and unexpected, but now documented)

openai.BadRequestError: Error code: 400 - {'error': {'message': "max_tokens is too large: 6000. This model supports at most 4096 completion tokens, whereas you provided 6000."

gpt-3.5-turbo-16k (today, anyway)

(completion_tokens=69, prompt_tokens=24, total_tokens=93)
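The two 400 errors above reflect different validation rules: the base model rejects a request when prompt plus `max_tokens` exceeds the shared context window, while -1106 caps the completion size outright. A minimal sketch of that distinction, using plain Python rather than the OpenAI SDK (the function name and limit constants here are illustrative, taken from the error messages quoted above):

```python
# Illustrative model limits, taken from the error messages in this thread.
CONTEXT_LIMIT = 4097    # gpt-3.5-turbo: prompt + completion share one window
COMPLETION_CAP = 4096   # gpt-3.5-turbo-1106: output capped regardless of input

def check_request(prompt_tokens: int, max_tokens: int, model: str) -> str:
    """Hypothetical helper: mimic the two 400-error behaviors described above."""
    if model == "gpt-3.5-turbo":
        total = prompt_tokens + max_tokens
        if total > CONTEXT_LIMIT:
            return (f"400: maximum context length is {CONTEXT_LIMIT} tokens; "
                    f"you requested {total}")
    elif model == "gpt-3.5-turbo-1106":
        if max_tokens > COMPLETION_CAP:
            return (f"400: max_tokens is too large: {max_tokens}; "
                    f"at most {COMPLETION_CAP} completion tokens supported")
    return "ok"

# The 24-prompt-token, max_tokens=6000 request from the errors above:
print(check_request(24, 6000, "gpt-3.5-turbo"))       # context-window error
print(check_request(24, 6000, "gpt-3.5-turbo-1106"))  # completion-cap error
print(check_request(24, 6000, "gpt-3.5-turbo-16k"))   # ok (today, anyway)
```

This is why swapping -16k for -1106 can break an application even though -1106 advertises a larger context window: the ceiling moved from the request total to the completion alone.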

So -16k has a good reason not to change, until they turn it off.


Documentation error:

The prices of current models have been wiped off the price list.