OpenAI API responses slower in the evenings? (gpt-3.5-turbo)

Hello everyone, we’ve been experiencing slower API response times every evening since December 5th. Anyone else experiencing something similar?

Looking forward to your feedback :grinning:

Welcome to the forum, Sebastien!

Is that a feeling, or do you actually monitor it?

I don’t have that feeling, but it gave me the idea to measure response times regularly and keep stats :slight_smile:

I did the first run (31 completions each): prompt_engineering_experiments/experiments/OpenAILLMsSpeedMeasurements/OpenAI LLMs Speed Measurements (report).ipynb at main · TonySimonovsky/prompt_engineering_experiments · GitHub

Interestingly, 3.5-turbo is way slower than 3.5-turbo-1106.
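
If anyone wants to reproduce a rough version of this locally, here’s a minimal timing sketch. It is not the notebook code from the repo; the prompt, the sample size of 31 (matching the report), and the stats printed are just illustrative. It assumes the openai Python package (v1 client) and an `OPENAI_API_KEY` in the environment:

```python
import time
import statistics
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

MODELS = ["gpt-3.5-turbo", "gpt-3.5-turbo-1106"]
PROMPT = "Summarize the benefits of unit testing in three sentences."  # illustrative prompt
RUNS = 31  # same sample size as the linked report

def time_completion(model: str) -> float:
    """Return wall-clock seconds for a single chat completion."""
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return time.perf_counter() - start

for model in MODELS:
    samples = [time_completion(model) for _ in range(RUNS)]
    print(
        f"{model}: mean={statistics.mean(samples):.2f}s "
        f"median={statistics.median(samples):.2f}s "
        f"stdev={statistics.stdev(samples):.2f}s"
    )
```

Run it at different times of day and compare the means to see whether the evening slowdown shows up in your own numbers.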


Thx Tony,

We measure everything. I keep feelings for my wife and family :smile:

We have an average response time of around 3 seconds during the day and 12 seconds in the evening.


Here is a resource to help you answer questions like this:


Hey Tony, so far so good! 3.5-turbo-1106 is much faster indeed and apparently more accurate. Thank you for your help :pray:


Glad I helped, Sebastien!

I’ve personally been using only 1106 since its release. Many community members were not happy with its results, but for all of my cases so far I find it great.

One thing though: if you expect JSON from the model, I strongly recommend breaking the solution into two parts: one call that actually responds, and another that only focuses on producing the JSON (statuses, etc.).

JSON takes up too much of the model’s attention, and it starts following the other instructions pretty badly.

Some preliminary experiment results: prompt_engineering_experiments/experiments/OpenAIAttentionGrab/OpenAI Attention Grab (report).ipynb at main · TonySimonovsky/prompt_engineering_experiments · GitHub
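
For anyone who wants a concrete picture of the two-call split, here’s a minimal sketch. The prompts, JSON keys, and helper names are my own illustration (not code from the experiments repo); it assumes the openai Python v1 client and the JSON mode that the 1106 models support:

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()
MODEL = "gpt-3.5-turbo-1106"

def answer_user(question: str) -> str:
    """Call 1: plain-text answer, with no JSON constraints competing for attention."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer the user's question helpfully."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def to_json(answer: str) -> str:
    """Call 2: its only job is to wrap the answer in the required JSON structure."""
    resp = client.chat.completions.create(
        model=MODEL,
        response_format={"type": "json_object"},  # JSON mode, supported by the 1106 models
        messages=[
            {
                "role": "system",
                "content": 'Return JSON with keys "answer" (string) and "status" ("ok" or "error").',
            },
            {"role": "user", "content": answer},
        ],
    )
    return resp.choices[0].message.content

print(to_json(answer_user("How do I reset my password?")))
```

The point of the split is that the first call can focus entirely on the content of the answer, while the second call only has to worry about formatting.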