Help: Playground Responses Not the Same as ChatGPT

Hi. I recently purchased a Playground subscription for my content-writing and other SEO workflows, but whenever I provide a prompt it produces very robotic content, even though the same prompt returns better results in the free version of ChatGPT.

I have tried different temperature and parameter settings. I even added a system message and tweaked my prompts several times, but the results still aren't what I expect.

Models I have tried: GPT-4, GPT-4o-mini, o1-mini, gpt-4-1106-preview

How can I fix or improve this? I prefer using the Playground over ChatGPT Plus for the flexibility of switching between different models.

I have spent more than 18 hours trying to figure this out, but no luck. Any help would be highly appreciated.

Thanks

In the Playground you are using the "bare" model, influencing its behaviour through your prompts and settings (temperature, etc.). ChatGPT is not the bare model but an application built on top of the model. It has its own (probably very sophisticated) system prompts and a framework around it, highly tuned for user experience. While in theory it should be possible to recreate such a framework (system prompts, settings, tools, etc.), I doubt you will get the same or similar results that easily in the Playground when demanding more complex tasks.
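To make that concrete, here is a minimal sketch of what "adding your own framework" looks like when calling the model directly. It assumes the official `openai` Python SDK; the system prompt text, model choice, and parameter values below are illustrative assumptions, not ChatGPT's actual (non-public) configuration:

```python
# Sketch: approximating ChatGPT-style behaviour via the raw chat API.
# The system prompt here is invented for illustration -- ChatGPT's real
# system prompt is not public and is likely far more elaborate.

system_prompt = (
    "You are a helpful, conversational writing assistant. "
    "Write in a natural, engaging tone and avoid stiff, robotic phrasing."
)

# A chat completion request is just a list of messages plus settings.
request = {
    "model": "gpt-4o-mini",      # any chat model you have access to
    "temperature": 0.9,          # higher = more varied, less "robotic"
    "presence_penalty": 0.3,     # mildly discourages repetition
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a 100-word intro about indoor plants."},
    ],
}

# With the official SDK (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

The point is that without a system message the model falls back to a fairly neutral default voice, which often reads as "robotic"; a persona-setting system prompt plus a moderately high temperature usually gets you much closer to ChatGPT's conversational output.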


Thanks for the response. But I usually see that apps built around the API tend to get quite good results.

I have tweaked my prompts further and have managed to somehow get better results for the time being.

Hi @syedali.wba and welcome to the community!

Can you share an example of your prompt and the content you are working with?