When I send requests via the Batch API with GPT-4o-mini, it is much slower than with GPT-4o. Is that expected? For a batch with 10 items, GPT-4o can finish within 2 minutes, but GPT-4o-mini is far slower, with no results even after more than 30 minutes. (Time of this post is 13:47, SG Time)
Welcome to the Forum!
By design, the turnaround window for a batch is 24 hours. Batches are often completed much faster, but that depends on many factors, most of which are not transparent to us.
Because of that, it is hard to make meaningful comparisons of batch completion times across different models.
Perhaps a large-scale test could reveal certain patterns, but I'm not sure anyone has run one.
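In the meantime, rather than assuming the batch has stalled, you can poll its status to see whether it is still within the window. A minimal sketch using the OpenAI Python SDK, assuming your API key is set in the environment and with a placeholder batch ID:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Replace with the ID returned when you created the batch
batch = client.batches.retrieve("batch_abc123")

# Status typically moves through "validating" -> "in_progress" -> "completed"
# (or ends as "failed" / "expired" / "cancelled"). Still "in_progress" after
# 30 minutes is well within the 24h completion window.
print(batch.status)
print(batch.request_counts)  # total / completed / failed so far
```

If the status is still "in_progress" after half an hour, that is normal behaviour for a batch job, even if another model happened to return results sooner.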