Hello, I have a question.
I’m testing the GPT API in the OpenAI Playground.
I use “chat” mode with the “gpt-4” model to ask some questions.
It works just like ChatGPT; the response is quick (usually a few seconds before it starts typing).
However, when I call the GPT API from my local environment with the same setup (“chat” mode with the “gpt-4” model) and ask the same question I asked in the Playground, the response is very slow (it takes about 5-10 times longer than in the Playground).
Why does this happen, and are there any solutions for it?
The source code is below; it’s copied from the Playground.
I tried setting the stream param to true, but it does not help (my streaming attempt is sketched after the main snippet).
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const response = await openai.createChatCompletion({
  model: "gpt-4",
  messages: [
    {
      role: "user",
      content: "Create a comparison table for Toyota, Tesla, and Honda in markdown format.",
    },
  ],
  temperature: 1,
  max_tokens: 256,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  // stream: true
});
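For context, the streaming variant I tried looks roughly like the sketch below. This is only a sketch of my assumption about how streaming is consumed with the v3 openai-node SDK: the second argument is forwarded to axios as request options, and the server-sent-event parsing in the data handler is illustrative, not my exact code.

// Sketch of a streaming call (assumption): responseType "stream" makes axios
// hand back a readable stream instead of a parsed JSON body.
const streamResponse = await openai.createChatCompletion(
  {
    model: "gpt-4",
    messages: [
      { role: "user", content: "Create a comparison table for Toyota, Tesla, and Honda in markdown format." },
    ],
    temperature: 1,
    max_tokens: 256,
    stream: true,
  },
  { responseType: "stream" }
);

streamResponse.data.on("data", (chunk) => {
  // Each chunk holds one or more "data: {...}" lines; "[DONE]" marks the end.
  const lines = chunk.toString().split("\n").filter((l) => l.startsWith("data: "));
  for (const line of lines) {
    const payload = line.replace(/^data: /, "");
    if (payload === "[DONE]") return;
    const delta = JSON.parse(payload).choices[0].delta;
    if (delta.content) process.stdout.write(delta.content); // tokens print as they arrive
  }
});

Even with this, the time until the first token arrives still feels much longer than in the Playground.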
Thanks,