Why Is the API Slower Than the Playground?


I am trying the API and the Playground with the same purpose and settings. I am using Davinci to chat. But even with longer prompts the Playground returns within a second, while the API takes nearly 4-5 seconds.

Is there a setting or some other way to improve API performance?
I am on free credits now. If I upgraded, would API performance increase?
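To compare the two fairly, it helps to measure the API's latency on the exact same prompt and settings as the Playground. Below is a minimal timing sketch; the commented-out `openai.Completion.create` usage assumes the legacy Python client and the `davinci` completions endpoint, and the model/parameter names there are assumptions, not something confirmed in this thread.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) so the API's
    round-trip time can be measured on identical inputs."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical usage with the openai library (names are assumptions):
# import openai
# _, latency = timed_call(
#     openai.Completion.create,
#     model="davinci",
#     prompt="Hello",
#     max_tokens=16,
# )
# print(f"API latency: {latency:.2f}s")
```

Running this a few times from different networks also rules out your own connection as the source of the extra seconds.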


I have the same observation. I now have a paid account, but I have not seen any speed improvement.

I guess we could try and look under the hood of the playground call from the browser to see if they have a -go_fast switch.

So far I have just put it down to a cynical move to lull buyers into a false sense of quality. But maybe I am being too cynical; OpenAI does not, on first impressions, appear to run themselves that way.

That’s true, Paul, the Playground is way faster than the API. It could also be due to the various hops between the server, backend, and front end delivering the result.

Can you also help me out? We want to use the “chat” preset of the Davinci model via the API on our front end.

I am not sure what you mean by the “chat” preset. My understanding is that Davinci is like an auto-complete engine. It works a bit like ChatGPT insofar as you get similar answers (although not as comprehensive), but it retains no memory of the conversation, unlike ChatGPT. So each call is like a brand-new conversation. Unless there is something I have missed?
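Since the completions endpoint has no memory of its own, the usual workaround is to keep the conversation history yourself and flatten it into each new prompt. A minimal sketch (the `Human:`/`AI:` labels match what the Playground's chat preset uses, but the helper itself is hypothetical):

```python
def build_chat_prompt(history, user_message,
                      human_label="Human", ai_label="AI"):
    """Flatten prior (human, ai) turns plus the new message into a
    single completion prompt, ending where the model should reply."""
    lines = []
    for human_turn, ai_turn in history:
        lines.append(f"{human_label}: {human_turn}")
        lines.append(f"{ai_label}: {ai_turn}")
    lines.append(f"{human_label}: {user_message}")
    lines.append(f"{ai_label}:")  # trailing label cues the model to answer
    return "\n".join(lines)

# Example: one prior exchange plus a new question
prompt = build_chat_prompt(
    [("Hi", "Hello! How can I help you today?")],
    "What is the capital of France?",
)
```

You then send the whole `prompt` on every call and append the model's reply to `history`, which is effectively what the Playground does behind the scenes.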


Oh! Thanks Paul

In the OpenAI Playground, you can select the “chat” preset and have a very smooth and intelligent conversation, but it isn’t the same for the API after implementation: the conversation is shabby and doesn’t make complete sense. It’s all over the place.

I had a look at the “chat” preset and I see what you mean. I tried the API myself and could not get it to remember anything between calls.

Looking at the request the Playground makes, the only thing that stood out was presence_penalty set to 0.6.

Matching their inputs made no difference through the API for me.
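For anyone else trying to reproduce the preset, here is a sketch of the request parameters. Only the presence_penalty of 0.6 comes from this thread; the other values (temperature, max_tokens, the stop sequences) are assumptions about what the chat preset typically sends, so verify them against your own browser's network tab.

```python
def chat_preset_params(prompt):
    """Hypothetical parameter set mimicking the Playground 'chat' preset."""
    return {
        "model": "davinci",
        "prompt": prompt,
        "temperature": 0.9,        # assumption: not confirmed above
        "max_tokens": 150,         # assumption: not confirmed above
        "presence_penalty": 0.6,   # the one value observed in the Playground
        # Stop sequences keep the model from writing both sides of the chat:
        "stop": ["\nHuman:", "\nAI:"],
    }
```

The stop sequences matter as much as the penalties: without them the model tends to continue the dialogue on its own, which may explain the “all over the place” replies.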


Thanks @paul.armstrong for the heads up regarding the paid plan.

However, I found out about ChatGPT Plus. I know it’s a different thing from an API upgrade, but by any chance, is there a possibility of speed improvement in API calls if we get ChatGPT Plus?