Inconsistent responses between ChatGPT and Playground using GPT-4o

Hello,

I’m encountering differing outputs when using the same prompt with GPT-4o in both ChatGPT’s temporary chat mode and the OpenAI Playground.

Has anyone else experienced this issue? Any suggestions to achieve consistent results across both platforms would be appreciated.

Thank you.


What parameters are you using? We don’t know what temperature and other sampling settings ChatGPT uses internally, but they matter a lot when you’re trying to get the same or similar responses.
ChatGPT could also be running on a completely different seed, or even a different model version (something like gpt-4o-chatgpt-private), which would make matching parameters moot anyway.
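If you’re calling the API directly, here’s a minimal sketch (using the official openai Python SDK) of how you might pin temperature, top_p, and a seed to cut down run-to-run variation. The model id, prompt, and values are just placeholders, and the seed is a best-effort feature, not a hard determinism guarantee:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin the sampling parameters and seed so repeated calls are as
# reproducible as the API currently allows. ChatGPT's own internal
# settings are not published, so these values are illustrative only.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain beam search in two sentences."}],
    temperature=0,  # low temperature = less run-to-run variation
    top_p=1,
    seed=42,        # best-effort determinism, not guaranteed
)

print(response.choices[0].message.content)
print(response.system_fingerprint)  # changes when the backend model/config changes
```

If the system_fingerprint changes between runs, the backend itself changed, and you shouldn’t expect identical outputs even with the same seed.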

Cheers! :hugs:

Thank you for your reply!

I’ve actually experimented with different combinations of temperature and top-p settings in the Playground, but I still see inconsistencies compared to ChatGPT’s temporary chat mode.

I also noticed that the Playground offers the gpt-4o-latest model, but switching to it didn’t yield more consistent or relevant results. Do you know if this version differs significantly from what ChatGPT is using?
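In case it helps anyone else comparing the two, here’s a rough sketch of sending the same prompt to both model ids with identical parameters via the openai Python SDK. The chatgpt-4o-latest id is what the API currently exposes for the ChatGPT-tuned snapshot, but double-check the exact ids in your own model dropdown before relying on this:

```python
from openai import OpenAI

client = OpenAI()

PROMPT = "Summarise the plot of Hamlet in one sentence."
# Substitute whatever ids your Playground model list actually shows.
MODELS = ["gpt-4o", "chatgpt-4o-latest"]

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,
        seed=42,
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```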

Would love to hear your insights!

Cheers! :blush: