Hi OpenAI Team,
I’m a big fan of ChatGPT (so much so that I enlisted its help to draft this message and to check whether this feature had already been requested!). While there are existing discussions about reducing costs through model selection and fine-tuning, I couldn’t find a specific request for automatic model selection that balances cost and performance.
I’d love to see a setting that optimizes when the most advanced model (GPT-4o) is used. Right now it’s selected automatically, but many of my queries don’t require that level of power. A feature that defaults to a lighter model for routine tasks—reserving GPT-4o for complex problems—would reduce costs and improve efficiency.
It would be great to have an option that either lets ChatGPT decide when GPT-4o is truly necessary or lets users toggle this behavior themselves. Is this something OpenAI might consider adding?
Thanks for all the amazing work you do!