Too much difference in Playground response vs API response

Hello,

I’ve been using a prompt in the OpenAI Playground that delivers precise responses in the specified format without any issues. However, when I run the same prompt through the OpenAI API, specifically via the OpenAI Python SDK, the response quality drops and the responses no longer conform to the required format. I’ve double-checked that all the parameters are identical between the Playground and the API.

@SamAltman

Hi there and welcome to the forum. In your API call, do you use the same model parameters as the ones in the Playground, for example temperature, frequency penalty, and presence penalty? Differences here could be one driver of the variance you’re seeing.
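
For reference, here is a minimal sketch of passing those parameters explicitly through the Python SDK; the model name, prompts, and parameter values are placeholders you would replace with whatever your Playground session actually uses:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pass every sampling parameter explicitly; anything omitted falls back
# to the API default, which may not match what the Playground UI shows.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder: use the exact model selected in the Playground
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # placeholder
        {"role": "user", "content": "Your prompt here"},                # placeholder
    ],
    temperature=1.0,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    max_tokens=256,
)

print(response.choices[0].message.content)
```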

zero difference

Press “View code” in the API Playground (with your prompt set up, but before the AI response is included).

Add `print(response.choices[0].message.content)` to the end of the exported script.

Setting `top_p` to 0.001, as in my example, makes sampling nearly deterministic and so ensures a near-identical response.
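
Putting those steps together, a sketch of what the exported snippet might look like after appending the print line (assuming the current openai Python SDK; the model and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder: whatever model "View code" exported
    messages=[{"role": "user", "content": "Your prompt here"}],  # placeholder
    top_p=0.001,  # near-zero top_p makes sampling almost deterministic
)

# Appended line: print the model's reply so you can compare it
# directly against the Playground output.
print(response.choices[0].message.content)
```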

Another thread from this week answering the same question:
