GPT (gpt-4) API works much slower than Playground

Hello, I have a question.

I’m testing the GPT API in the OpenAI Playground.
I use “chat” mode with the “gpt-4” model, asking some questions.
It works just like ChatGPT; the response time is quick (usually a few seconds before it starts typing).

However, when I call the GPT API from my local environment with the same setup (“chat” mode, “gpt-4” model) and ask the same question I asked in the Playground, the response is very slow (roughly 5–10 times slower than the Playground).

Why does this happen, and are there any solutions for it?

The source code is below; it’s copied from the Playground.
I tried setting the stream param to true, but it does not help.

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const response = await openai.createChatCompletion({
  model: "gpt-4",
  messages: [
    {
      role: "user",
      content: "Create a comparison table for Toyota, Tesla, and Honda in markdown format.",
    },
  ],
  temperature: 1,
  max_tokens: 256,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  // stream: true
});



In the Playground you are using the very fast gpt-3.5-turbo model, so it’s quick, while in your code you are calling the slower but more powerful gpt-4 model; a significant latency difference between the two is expected. Double-check which model is actually selected in the Playground.
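If gpt-4 latency is the real concern, streaming at least reduces the *perceived* wait, since tokens arrive as they are generated. A minimal sketch with the v3 Node SDK: with that SDK, `stream: true` alone is not enough — you also have to pass `responseType: "stream"` so axios hands you the raw SSE body. The `extractDeltas` helper below is my own illustrative parser, not part of the SDK:

```javascript
// Helper: pull content deltas out of an SSE chunk ("data: {...}" lines).
// Illustrative parsing; the API ends the stream with "data: [DONE]".
function extractDeltas(chunk) {
  const deltas = [];
  for (const line of chunk.toString().split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data: ")) continue;
    const payload = trimmed.slice("data: ".length);
    if (payload === "[DONE]") continue;
    const content = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (content) deltas.push(content);
  }
  return deltas;
}

async function main() {
  // Assumes the v3 "openai" Node SDK (Configuration/OpenAIApi), as in the post.
  const { Configuration, OpenAIApi } = require("openai");
  const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
  const openai = new OpenAIApi(configuration);

  const response = await openai.createChatCompletion(
    {
      model: "gpt-4",
      messages: [{ role: "user", content: "Say hello." }],
      stream: true,
    },
    // Axios option, forwarded by the v3 SDK: give us the raw response stream.
    { responseType: "stream" }
  );

  // Print tokens as they arrive instead of waiting for the full completion.
  response.data.on("data", (chunk) => {
    for (const delta of extractDeltas(chunk)) process.stdout.write(delta);
  });
}
```

Note that total generation time stays the same; streaming only moves the time-to-first-token from "after the whole answer" to "almost immediately".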