I am fairly sure that the batch API calls exactly the same models, just in their “free time.”

However, I was experiencing the same issue, and in my case the problem was that I was writing my requests to a JSONB column, to be picked up later by a job and submitted to OpenAI. JSONB sorts the keys in the JSON, so my function definitions were re-ordered, and key order matters a lot for performance. Changing the column's data type to JSON fixed it.
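
As a quick illustration of the difference (a minimal sketch you can run in psql; the keys and model name are just placeholders, not my actual payload):

```sql
-- json stores the text exactly as written, key order included
SELECT '{"messages": [], "model": "gpt-4o"}'::json;
-- => {"messages": [], "model": "gpt-4o"}

-- jsonb normalizes the value and stores keys in its own sorted order
-- (shortest key first, then bytewise), so the original order is discarded
SELECT '{"messages": [], "model": "gpt-4o"}'::jsonb;
-- => {"model": "gpt-4o", "messages": []}
```

One caveat: rows already stored as JSONB have lost their original key order for good, so switching the column type only helps payloads written after the change.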