openai.Completion.create API performance

I am using the `openai.Completion.create(model="text-davinci-003", prompt=prompt_text, max_tokens=2000)` API for a prompt completion, and it takes about a minute to complete. The same prompt completes much faster in ChatGPT. I cannot stream, since I need the data back as a whole. Does anyone have pointers on dealing with this issue? Latency is more important than throughput for my use case.
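For context on the streaming constraint: streaming only changes *when* the tokens arrive, not whether you end up with the complete text, so a client can consume the stream and join the chunks before using the result. A minimal sketch of that accumulation pattern (the `fake_stream` generator below is a stand-in I made up so the example is self-contained; with the real SDK you would iterate the response from `openai.Completion.create(..., stream=True)` instead):

```python
def fake_stream():
    # Stand-in for the chunked deltas a stream=True response would yield.
    for piece in ["The quick ", "brown fox ", "jumps over ", "the lazy dog."]:
        yield piece

def collect_stream(chunks):
    """Accumulate streamed chunks into one complete string."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)  # first tokens arrive long before the full result
    return "".join(parts)

full_text = collect_stream(fake_stream())
print(full_text)  # the complete response, assembled client-side
```

This does not reduce the total generation time, but it does mean "I need the whole result" is not by itself a reason streaming is unusable.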