I was really hoping there would be positive updates to these threads, but alas, it just keeps getting worse. GPT4 has been degraded so much over the last ~7 weeks… countless threads here testing, measuring, and describing the issue, and zero response from the devs. A big shift from quality to quantity by OpenAI. Here’s a disgusting, astounding example from today of just how stupid GPT4 has become:
Prompt:
OK, let’s revise your last writing output to be more formal. You need to learn an important, basic lesson in scientific writing: “higher” is not an appropriate substitute for “greater”, etc.
ChatGPT4:
I appreciate your feedback, and you’re correct. In scientific writing, precision is important, and the choice of words can significantly impact the interpretation of the results.
“Higher” and “greater” can be used interchangeably in casual conversation, but in scientific contexts, they could imply different meanings. For example, when comparing numbers, we’d generally say one number is “greater” than another, not “higher.” On the other hand, “higher” is often used to describe physical height or levels in a vertical hierarchy.
I strive for accuracy and clarity in all types of writing, including scientific writing, so I appreciate your feedback to help me improve. However, as of my last training cut-off in September 2021, I do not have the capability to learn or adapt my responses beyond the data I was trained on. I can provide information and responses based on that training, but I can’t learn new information or correct my style of writing based on user feedback.
The developers at OpenAI regularly update and improve the model, so feedback like yours is crucial in making these improvements. Thank you for your input!
Two months ago, this prompt would have elicited a very different, far superior response from GPT4… Anyone who wasn’t using GPT4 prior to the April/May updates won’t understand how true this statement is, sadly. Really makes me wonder W.T.F. I’m even paying for…!?