Insights on ChatGPT Enterprise Using GPT-4-1106-Preview Based on Context Length Specifications

Given that gpt-4-1106-preview (aka GPT-4 Turbo) is a lower-cost model, exhibits the same "laziness" in ChatGPT as when that model is specified directly via the API, and has been trained on the parallel tool-calling skills required for the retrieval functions of agents, it is likely that all of ChatGPT is running the latest API model (or a variant even beyond it).

"Context length" would then just be the input budget that the special management endpoint and client software report for conversation and data-file length on a given account type, obscured by a confusing max_tokens specification, rather than a different underlying model.
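As a sketch of that interpretation, the per-account "context length" could simply be what remains of the model's context window after reserving a max_tokens allowance for the reply. The figures below are illustrative assumptions, not OpenAI's actual account settings:

```python
def input_budget(context_window: int, max_tokens: int) -> int:
    """Tokens left for conversation history and attached files
    after reserving max_tokens for the model's output."""
    return context_window - max_tokens

# gpt-4-1106-preview advertises a 128k context window with output
# capped at 4,096 tokens; the remainder is what the client can spend
# on conversation and retrieved file content.
print(input_budget(128_000, 4_096))
```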

(PS, OpenAI: misspelled bismo_settings?)
