GPT-4 model API responding slowly

Around 8/3 14:00 (GMT+8), when I used the Chat Completions API with the gpt-4-0613 model to generate content, a request of about 1,679 tokens (completion_tokens: 539, prompt_tokens: 1,140) took about 1 minute 13 seconds. Before that, the same content could be completed within 10 seconds. I tried switching the model to gpt-3.5-turbo-16k-0613, and it took only 12 seconds. Does anyone have the same problem? Could it be because of the gpt-4-0613 model?
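For reference, here is a minimal sketch of how the timing comparison above could be reproduced, assuming the legacy openai Python SDK (v0.x) and an OPENAI_API_KEY environment variable; the prompt text is a placeholder, not the original content.

```python
# Minimal latency check (sketch), assuming the legacy openai Python SDK (v0.x)
# and OPENAI_API_KEY set in the environment. The prompt is a placeholder.
import os
import time

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def timed_completion(model: str, prompt: str) -> None:
    """Send one chat completion and report elapsed time and token usage."""
    start = time.perf_counter()
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    usage = response["usage"]
    print(
        f"{model}: {elapsed:.1f}s "
        f"(prompt_tokens={usage['prompt_tokens']}, "
        f"completion_tokens={usage['completion_tokens']})"
    )

# Compare the two models on the same prompt.
for model in ("gpt-4-0613", "gpt-3.5-turbo-16k-0613"):
    timed_completion(model, "Summarize the benefits of unit testing.")
```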

Welcome to the developer forum!

At certain times of day, when lots of new users are coming online (as is happening with the GPT-4 API rollout), or when there are server issues, response times can fluctuate like this. I find this page useful for checking whether it’s a one-off on my end or a system-wide issue:


@Foxalabs Thank you for your assistance. I think this dashboard has solved my problem.