GPT-3.5 models became useless; they behave very differently than the same models in the playground

Recently, GPT models have started to behave very differently through the API. We create our prompts in the playground and then use them via the API.

Using the same prompt in the playground works perfectly; through the API the results are much worse.

We seem to be moving backwards with these models from OpenAI. Disappointing.

Press the playground's "view code" button to ensure every single API setting and message role is reproduced exactly when you call the API.

The playground is exactly the same access to the same models (and you are billed for it too).
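As a minimal sketch of what "view code" gives you: the playground session and an API call only match when every sampling parameter and message role is copied over explicitly, since any parameter you omit falls back to a server-side default. The model name, parameter values, and prompt text below are placeholders; substitute whatever your own playground session shows.

```python
# Hypothetical payload mirroring a playground session's "view code" output.
# All values here are placeholders; copy the exact ones from your session.
payload = {
    "model": "gpt-3.5-turbo",
    "temperature": 1.0,        # match the playground slider, do not omit
    "top_p": 1.0,
    "max_tokens": 256,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "messages": [
        # Reproduce the system message and the role structure exactly;
        # a missing or different system message changes results noticeably.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your prompt here"},
    ],
}

# With the official Python SDK, this dict would be passed as
# client.chat.completions.create(**payload).
print(sorted(payload.keys()))
```

Comparing this dict field by field against the playground's "view code" output is usually enough to find the setting (often temperature or the system message) that explains the difference.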