GPT-4 is consistently giving identical answers to the code I provide, even when I explicitly tell it what is wrong. Something is seriously broken with the service at the moment. It could be related to the new models or updated training data; I'm not sure, since the whole platform is painfully opaque. I'm only complaining now because it seems fundamentally incapable of working in the context I've been using it for for so long. At this point, it's faster to just write the code myself. I was not expecting it to get worse over time.
GPT-3.5 is performing better than GPT-4 at coding tasks right now. Something must be wrong.
A good workaround for now is to use the deprecated dated snapshots (0314 and 0613) via the Playground/API instead of the floating model alias. Please fix this, OpenAI!
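For anyone who wants to try the workaround, here is a minimal sketch of pinning a dated snapshot instead of the floating "gpt-4" alias. It only builds the Chat Completions request payload rather than sending it (you'd need your own API key and HTTP client for that); the model name and prompt are just illustrative assumptions.

```python
# Sketch: pin a dated model snapshot instead of the floating "gpt-4" alias.
# Builds the Chat Completions request body only; sending it requires an API key.

def build_chat_request(prompt: str, model: str = "gpt-4-0314") -> dict:
    """Return a Chat Completions payload pinned to a dated snapshot."""
    return {
        "model": model,  # dated snapshot, not "gpt-4", so upgrades don't change behavior
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Review this function for the bug I described.")
print(payload["model"])  # → gpt-4-0314
```

The point of pinning is that the dated snapshot stays frozen, so if the floating alias regresses, your requests keep hitting the older weights until the snapshot itself is retired.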