Ever since the 4o rollout, the ability to select a model seems to be gone, with 4o as the default. The only way to use 3.5 appears to be changing the model manually for each reply.
Using a prompt like the following has no effect:
Please use the GPT-3.5 model for this entire conversation without switching to another model.
It acknowledges that it will use 3.5, but keeps using 4o anyway.
The only way to get 3.5 as the default is to use up the 4o allotment; from that point on, 3.5 is used. But if the 4o allotment resets in the middle of a conversation, it silently switches back to 4o, with no notification to the user whatsoever.
I asked 4o what to do to make 3.5 the default for a conversation. It said this:
You as the user have no control over which underlying model is used
Having been in a conversation with ChatGPT for the last 5 hours, I'm beginning to see the truth in that answer.
I'm purposefully posting this here, not in the bug section, because I strongly believe this is not a bug but a carefully thought-out, planned, and rolled-out feature of the 4o update: the previous menu setting for choosing a model has effectively been removed, except on a per-reply basis, which is a pain.
My question is: are we being nudged out of 3.5?
If this is the case, I would appreciate transparency from OpenAI about it, rather than having it done behind users' backs.
BTW, I'm using OpenAI's own web chat interface, not a 3rd-party one.
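
As an aside, for contrast: in the API (unlike the web chat interface), the model is pinned explicitly per request, so it cannot silently fall back to 4o. A minimal sketch using the official openai Python package, assuming an OPENAI_API_KEY environment variable is set:

```python
# Sketch only: with the API, the model is chosen explicitly on every request,
# unlike the ChatGPT web UI where the underlying model can change silently.
# Assumes the official `openai` package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model is fixed here for this request
    messages=[
        {"role": "user", "content": "Which model are you?"},
    ],
)
print(response.choices[0].message.content)
```

That is obviously not the same experience as the web chat, but it shows the kind of control that used to be available in the UI's model menu.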