It seems to me that this information is not available on OpenAI Platform
We think it’s a custom version of the turbo model… You can also choose GPT-4 with Plus…
Thank you for your effort. I will keep on trying to find the official information, though.
The model used is likely just a differently-trained variant of the one available via the API, activated when you enable plugins.
Since the language format the AI uses for plugins is not documented, and the endpoint may inject and expect tokens that we can’t replicate, it is hard to test this theory by simulating the standard API model the same way it behaves with plugins enabled.
It is clear that code interpreter (advanced data analysis) gets a different AI model, and so does API function calling when used.
I mean that in the chat interface on chat.openai.com there are GPT-3.5 and GPT-4 available, and I wonder which models exactly they are, so I can determine their context windows.
Yes, the context lengths are the same. 1536 is the max_tokens reserved for writing a response.
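To illustrate what that reservation implies: if some of the context window is held back as max_tokens for the reply, the rest is what’s left for the system prompt plus conversation history. A minimal sketch, assuming an 8192-token context for GPT-4 and 4096 for GPT-3.5 (those figures and the function name are my own assumptions, not confirmed by OpenAI):

```python
# Sketch of how the chat backend may budget tokens.
# Assumed numbers: 1536 tokens reserved for the reply,
# 8192-token context for GPT-4, 4096 for GPT-3.5.
def prompt_budget(context_window: int, reserved_for_reply: int = 1536) -> int:
    """Tokens left over for the system prompt + conversation history."""
    return context_window - reserved_for_reply

print(prompt_budget(8192))  # GPT-4 in the chat UI -> 6656
print(prompt_budget(4096))  # GPT-3.5 -> 2560
```

So under these assumptions, the GPT-4 chat model would have roughly 6656 tokens of usable input context, regardless of which underlying checkpoint it is.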
Thank you. Can you provide a source for this information?
From the model dumping it out on plugin bugs, many times.