[GPT-3.5-Turbo-16k] Response generation is slower now for Function Calls

Thanks for your response @Foxalabs.

I’ve been trying to quantify the variations I’ve seen over the last few days. I’ve run the program at different times of day (from 4 AM to 4 PM), but the problem persists regardless of the hour. From my observations, it affects specific kinds of requests only.
For requests that result in plain text output, responses are just as quick as before. But when the model’s response includes a function call, it takes unusually long to generate. Subsequent text-generation calls after that are still fine.
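
To illustrate the comparison, here’s a minimal sketch of the kind of timing harness I’m using (openai-python 0.x `ChatCompletion` API; the `get_weather` schema is just a placeholder standing in for my real function definitions):

```python
import time
import openai  # openai-python 0.x style API

openai.api_key = "sk-..."  # your API key here

# Placeholder function schema (hypothetical; substitute your own definitions).
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }
]

def timed_request(messages, use_functions):
    """Send one chat completion request and return wall-clock latency."""
    kwargs = {"model": "gpt-3.5-turbo-16k", "messages": messages}
    if use_functions:
        kwargs["functions"] = functions
    start = time.perf_counter()
    response = openai.ChatCompletion.create(**kwargs)
    elapsed = time.perf_counter() - start
    finish = response["choices"][0]["finish_reason"]
    return elapsed, finish

# Compare a plain text request against one likely to trigger a function call.
for prompt, use_fns in [
    ("Write one sentence about the sky.", False),
    ("What is the weather in Paris right now?", True),
]:
    elapsed, finish = timed_request([{"role": "user", "content": prompt}], use_fns)
    print(f"functions={use_fns} finish_reason={finish} latency={elapsed:.2f}s")
```

With something like this, the text-only request comes back quickly, while the request that finishes with `finish_reason == "function_call"` is the one showing the unusually long latency.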
So, at least from my observations, it seems like something is wrong with function calls specifically. I’m not sure whether anything changed with function calling in the last month.