Context window size for the babbage-002 model

Hi everyone,

I’d appreciate your help. Could someone please clarify the maximum number of tokens in the context window for the babbage-002 model? The Models page states that the maximum is 16,384 tokens, while the Fine-tuning page says, “The legacy prompt completion pair data format has been retained for the updated babbage-002 and davinci-002 models to ensure a smooth transition. The new models will support fine-tuning with a 4k token context …”, which seems to imply a 4k maximum context for babbage-002. Meanwhile, the Playground limits the ‘maximum length’ to 2,049 tokens for babbage-002.

So, which one is it?

Welcome to the Forum!

Since this is specifically addressed in the fine-tuning Q&A, I would go with the guidance there.

I believe there is a glitch in the Playground: the maximum-token value seems to mean something different depending on the selected model, so I would not treat it as a reliable source at the moment. My assumption is that for babbage-002, the 2,049-token limit in the Playground refers to the maximum output tokens.

The Playground states that a maximum of 2,049 tokens is allocated for the prompt and response together for the babbage-002 model. Could the context window be 4k for fine-tuning and 16k for the API? A clarification from OpenAI regarding the different maximum context lengths quoted for babbage-002 across their documentation and the Playground would be appreciated.
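To make the discrepancy concrete, here is a small sketch of how the three quoted limits would affect a request. The numbers are the ones cited in this thread, not authoritative values from OpenAI, and the helper name is my own:

```python
# The three context-window figures quoted in this thread for babbage-002.
# These are taken from the discussion above, not from an official source.
CANDIDATE_LIMITS = {
    "Models page": 16_384,      # stated maximum tokens
    "Fine-tuning page": 4_096,  # "4k token context" for fine-tuning
    "Playground": 2_049,        # prompt + completion cap shown in the UI
}

def fits(prompt_tokens: int, max_completion_tokens: int, context_window: int) -> bool:
    """A request fits if prompt and completion together stay within the window."""
    return prompt_tokens + max_completion_tokens <= context_window

# Example: a 3,000-token prompt requesting 500 completion tokens.
# It fits under the 16k and 4k figures, but not under the Playground's 2,049.
for source, limit in CANDIDATE_LIMITS.items():
    print(f"{source} ({limit} tokens): {fits(3_000, 500, limit)}")
```

So the same request would be accepted or rejected depending on which of the documented limits actually applies, which is why a clarification matters.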

Yes, that’s my interpretation of the information as well.

Agreed, it could be made clearer across the different docs and platforms.