Tips for Reducing OpenAI Code Interpreter Latency?

Hi everyone,

I’ve been working with Code Interpreter via the Python API and am seeing noticeable latency. For example, with a stripped-down version of my bot I only ask it to filter a DataFrame and return the first row of the result — this takes ~20 seconds from prompt to result, even though both the code and the result are very short!
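For comparison, the equivalent filter runs locally in milliseconds, which suggests the delay is API round-trip and model overhead rather than the computation itself. A minimal sketch (the DataFrame here is a made-up stand-in for the real CSV):

```python
import time
import pandas as pd

# Hypothetical stand-in for the uploaded CSV: 100k rows, two columns.
df = pd.DataFrame({"id": range(100_000), "value": [i % 97 for i in range(100_000)]})

start = time.perf_counter()
first_match = df[df["value"] > 50].iloc[0]  # filter, then take the first row
elapsed = time.perf_counter() - start

print(f"local filter took {elapsed * 1000:.2f} ms")
```

On my machine this kind of filter finishes in single-digit milliseconds, so essentially all of the ~20 seconds is spent outside the code itself.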

Does anyone have any tricks or settings adjustments that I could try on my end to speed things up a bit? Any advice to minimize this delay would be greatly appreciated.

As a workaround, I split the work into two cells: one loads the CSV and converts it to a DataFrame, and another does the filtering, so the experienced latency feels shorter to end users.
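The split described above can be sketched locally like this — the expensive load happens once up front, and the per-request cell only does the cheap filter (the CSV contents and function name are invented for illustration):

```python
import io
import pandas as pd

CSV_TEXT = "city,population\nParis,2100000\nLyon,520000\nNice,340000\n"  # stand-in data

# "Cell" 1 (run once, up front): load the CSV into a DataFrame.
df = pd.read_csv(io.StringIO(CSV_TEXT))

# "Cell" 2 (run per request): the cheap filter the user actually waits on.
def first_city_over(threshold: int) -> str:
    match = df[df["population"] > threshold]
    return match.iloc[0]["city"]

print(first_city_over(400_000))  # → Paris
```

The design point is simply moving fixed setup cost out of the request path, so only the second, fast step contributes to the latency the user perceives.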

I’m attaching an annotated screenshot of a mock conversation to illustrate the issue. In it, I input the first 20 Fibonacci numbers in shuffled order and ask for the first prime among them (the result is correct).
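For reference, the mock task itself is trivial to reproduce locally; a short sketch (the seed and helper names are my own, chosen so the shuffle is reproducible):

```python
import random

def fib_list(n: int) -> list[int]:
    """First n Fibonacci numbers, starting 1, 1, 2, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def is_prime(k: int) -> bool:
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

nums = fib_list(20)
random.seed(42)          # fixed seed so the shuffled order is reproducible
random.shuffle(nums)

# First prime encountered in the shuffled order.
first_prime = next(x for x in nums if is_prime(x))
print(first_prime)
```

Whatever the shuffle order, the answer must be one of the primes among the first 20 Fibonacci numbers (2, 3, 5, 13, 89, 233, 1597) — again underlining that the computation is not where the 20 seconds go.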

Based on my experience, 20 seconds is quite normal, or at least expected.