Long response times with the gpt-3.5-turbo API model

Without access to what’s happening behind OpenAI’s API, a thought:

Did you try the exact same input on both models? Input and output token counts affect response time, so differing prompt or completion lengths may explain the gap you're seeing.

You can count tokens locally, without calling the API: see OpenAI's help page on tokenizers.