Good point. Ultimately, as you have said, these kinds of issues seem to be part of the divergence between cGPT and iGPT. I like your solution of catching a generic response and re-attempting with iGPT. I have also seen reports that increasing the temperature helps (although I haven't tested it myself).
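For anyone wanting to try this, here's a minimal sketch of the catch-and-retry idea. The marker strings, the `looks_generic` heuristic, and the model callables are all my own assumptions, not anything from the docs — you'd plug in your actual cGPT/iGPT API calls:

```python
# Sketch: detect a "generic" cGPT reply and retry with an instruct model
# at a higher temperature. Markers and thresholds are illustrative guesses.

GENERIC_MARKERS = (
    "as an ai language model",
    "i'm sorry, but i cannot",
)

def looks_generic(reply: str) -> bool:
    """Crude heuristic: flag replies containing boilerplate refusal phrases."""
    text = reply.lower()
    return any(marker in text for marker in GENERIC_MARKERS)

def ask_with_fallback(prompt, ask_chat, ask_instruct,
                      temperature=0.7, retry_temperature=0.9):
    """Try the chat model first; if the reply looks generic,
    re-ask the instruct model at a higher temperature."""
    reply = ask_chat(prompt, temperature=temperature)
    if looks_generic(reply):
        reply = ask_instruct(prompt, temperature=retry_temperature)
    return reply
```

The two `ask_*` callables are just whatever wrappers you already have around the chat and instruct endpoints; keeping them as parameters makes the retry logic trivial to test with stubs.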
I find cGPT to be nothing more than a safe, surface-level conversation tool, and I'm already using Davinci & other models for deeper operations such as conversation & context management. Hopefully we see a price reduction for Davinci as well. And as you mentioned, the ability to fine-tune cGPT will be very useful.
I recall the docs stating: “cGPT should be used for operations for now” given the price point. It's hard to justify spending 10x more.