Feature Proposal: Model Selection Per Conversation
Currently, ChatGPT allows users to switch between models (e.g., GPT-4, GPT-3.5, or lighter models like o3-mini), but this setting applies globally across all conversations. A more efficient approach would be to let users assign a specific model to each conversation, so that each chat keeps the capabilities best suited to its context.
Why This Would Be Useful
1. Optimized Responses Based on Context
• Conceptual discussions → Use GPT-4 for in-depth analysis and nuanced reasoning.
• Coding or quick responses → Use GPT-3.5 or o3-mini for faster and more cost-effective processing.
2. Consistency Within Conversations
• Once a model is selected for a conversation, it stays in use throughout that chat.
• Prevents unintentional model switching, ensuring a coherent discussion flow.
3. Better Resource Management
• Users could strategically allocate models based on their needs, avoiding unnecessary costs or slowdowns when a simpler model would suffice.
How It Could Work
• Model Selection When Starting a New Chat: Users choose a model at the beginning of a conversation.
• Persistent Model per Conversation: Each chat retains its assigned model for its entire lifetime; picking a different model means starting a new conversation (see the sketch after this list).
• Model Indicator: A small label shows which model is active in each conversation, to avoid confusion.
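To make the mechanics concrete, here is a minimal TypeScript sketch of the data model this proposal implies. All names here (`Conversation`, `startConversation`, the injected `complete` callback) are hypothetical and not part of any real ChatGPT API; the point is simply that the model identifier lives on the conversation record rather than in a global setting.

```typescript
// Hypothetical model identifiers, matching the examples in the proposal.
type ModelId = "gpt-4" | "gpt-3.5" | "o3-mini";

interface Message {
  role: "user" | "assistant";
  content: string;
}

interface Conversation {
  id: string;
  model: ModelId;       // fixed at creation, persists for the life of the chat
  messages: Message[];
}

// Step 1: a new chat is created with an explicitly chosen model.
function startConversation(id: string, model: ModelId): Conversation {
  return { id, model, messages: [] };
}

// Step 2: every message is answered by the conversation's own model,
// never by a global setting. The actual completion call is injected,
// since the real backend API is outside the scope of this sketch.
async function sendMessage(
  conversation: Conversation,
  content: string,
  complete: (model: ModelId, history: Message[]) => Promise<string>,
): Promise<string> {
  conversation.messages.push({ role: "user", content });
  const reply = await complete(conversation.model, conversation.messages);
  conversation.messages.push({ role: "assistant", content: reply });
  return reply;
}

// Step 3: the UI label just reads the stored model.
function modelIndicator(conversation: Conversation): string {
  return `Model: ${conversation.model}`;
}
```

Because the model is a property of the conversation object, switching models in one chat cannot affect any other chat, which is exactly the consistency guarantee described above.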