I’d like to upgrade to one of the less expensive Turbo models, but I’m not sure which to use. The docs say:
gpt-4-1106-preview: GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic.
At the same time, there’s a newly launched Turbo model that doesn’t carry that warning:
The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn’t complete a task.
Is there any guidance on which of these can or should be used in a production environment?
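For context, switching between the two in code is just a matter of which model ID you pass. A minimal sketch of how I’d gate this, assuming the newer model is gpt-4-0125-preview (that ID is my assumption; the helper is hypothetical):

```python
# Hypothetical helper: pick a GPT-4 Turbo model ID by environment.
# gpt-4-0125-preview is assumed to be the newer "laziness"-fix model;
# gpt-4-1106-preview is flagged in the docs as not suited for production.
def choose_model(production: bool) -> str:
    if production:
        # Avoid the model the docs warn against for production traffic.
        return "gpt-4-0125-preview"
    return "gpt-4-1106-preview"

print(choose_model(production=True))
```

The returned string would then go into the `model` field of a Chat Completions request.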