Different results from openai and playground

We experience the same with fine-tuned models: completions differ massively between the Playground and the API, with the API results being far worse.


We experience the same problem with the gpt-4 model.
Maybe the Playground is adding some parameters to the request that we aren't including in ours.
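One way to test this hypothesis is to make every sampling parameter explicit in the API request instead of relying on server-side defaults. The sketch below builds a chat-completion request body that mirrors the settings the Playground displays in its sidebar; the specific default values shown (temperature 1.0, top_p 1.0, penalties 0, max_tokens 256) are assumptions based on the Playground UI at the time, not guaranteed API defaults, so check them against your own Playground session.

```python
# Assumption: the Playground always sends explicit sampling parameters,
# while a hand-written API call may silently fall back to different
# server-side defaults. Mirroring the Playground's sidebar values in the
# request body removes that source of divergence.

# Values as shown in the Playground sidebar (assumed, verify in your session).
PLAYGROUND_DEFAULTS = {
    "temperature": 1.0,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_tokens": 256,
}

def build_chat_request(messages, model="gpt-4", **overrides):
    """Build a chat-completion request body with every parameter explicit.

    Starts from the assumed Playground defaults, then applies any overrides,
    so nothing is left to an implicit server-side default.
    """
    body = {"model": model, "messages": messages, **PLAYGROUND_DEFAULTS}
    body.update(overrides)
    return body

request = build_chat_request(
    [{"role": "user", "content": "Write a dialogue between four people."}],
    temperature=0.7,  # copy whatever value your Playground session used
)
```

Passing `request` as the JSON body of the chat completions call (or as keyword arguments to the client library) then makes a like-for-like comparison with the Playground possible: if results still differ with identical parameters, the cause lies elsewhere (e.g. a system message the Playground includes).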

I’m having the same issue. I’m using the prompt ChatGPT suggests to generate a conversation between four people. In the Playground the results are great; via the API the results are much worse, when the request works at all.


Same here. I am using DaVinci3 for autoformalization (translating sentences into formal logic). My prompt works well in the Playground, but via the API it frequently returns completely wrong answers, making the intended application unusable.