Changing the model mid-conversation

For a day or two I had a dropdown that let me rerun a prompt with a different model. It was really useful. Instead of having to choose a model in advance, and usually defaulting to GPT4, I started choosing GPT3.5 and only reran a prompt with GPT4 if 3.5 couldn’t figure it out. It was a much nicer balance: it gave me a reason to choose 3.5, and I stopped wasting GPT4 quota (and compute) just because I had to pick a model up front.

It’s gone away, though. Not sure if I was just being A/B’d or if it was disabled for some technical reason, but now I’m missing it. I’d already begun to adjust my usage habits around it, and I have a number of GPT3.5 chats that I’ll have to recreate with GPT4.

It didn’t seem quite finished. You couldn’t change the default model for a chat, only rerun a single prompt, which meant that if you were already on GPT4 it didn’t really work: you couldn’t run a 3.5 query without first running it with GPT4. It also kept asking me whether the GPT4 response was better than the 3.5 one. I’m guessing that’s just an automatic prompt whenever there’s a regeneration, but it would make more sense to skip it in this case and instead add an option to the thumbs-down menu for pointing out a case where GPT4 did worse.

Hope it comes back.