Concerns regarding GPT-4 retirement and "System Prompt" alignment issues in new models

I am writing as a long-time Plus subscriber who relies on GPT-4's specific reasoning capabilities and professional tone. The upcoming retirement of GPT-4 poses a significant problem for my workflow.

I’ve noticed that newer models (GPT-4o, o1, and the upcoming 5.0) suffer from “over-alignment”: an overly moralizing, overly polite tone that interferes with complex tasks. Even with Custom Instructions, these models remain noticeably more restrictive than the original GPT-4.

Will there be a “Legacy Mode,” or some way to opt out of the “hyper-friendly” tone and keep the directness we had in GPT-4? This is a request to maintain model diversity for professional use cases.

This topic was automatically closed after 17 hours. New replies are no longer allowed.