Different behavior in batches

Hi,
I can’t provide specific examples, but the issue is that the behavior of the model (gpt-4o-mini) when called through the Batch API is DIFFERENT from the same request made as a standard sequential call.
Is this a known issue? Empirically, the responses differ quite a bit, but it’s hard to verify this reliably.

You can run an experiment: set the temperature to 0, and you should get very close responses from both paths. If you don’t, your hypothesis is validated; otherwise, it’s the same model.
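A minimal sketch of that experiment with the official Python SDK, assuming the Batch API alongside a standard chat completion. The prompt text, `custom_id`, and file name are placeholders, not anything from the original post:

```python
import json
from openai import OpenAI

client = OpenAI()

MODEL = "gpt-4o-mini"
PROMPT = "Summarize the water cycle in two sentences."  # hypothetical test prompt

# --- Sequential call at temperature 0 ---
sequential = client.chat.completions.create(
    model=MODEL,
    temperature=0,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Sequential:\n", sequential.choices[0].message.content)

# --- Identical request via the Batch API ---
# Each JSONL line is one request; the body mirrors the sequential call exactly.
with open("batch_input.jsonl", "w") as f:
    f.write(json.dumps({
        "custom_id": "temp0-test-1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": MODEL,
            "temperature": 0,
            "messages": [{"role": "user", "content": PROMPT}],
        },
    }) + "\n")

batch_file = client.files.create(
    file=open("batch_input.jsonl", "rb"),
    purpose="batch",
)
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print("Batch submitted:", batch.id)
# Once batch.status == "completed", download batch.output_file_id and compare
# the response text against the sequential output above.
```

Note that even at temperature 0 the API does not guarantee bit-identical outputs across calls, so expect "very close" rather than exact matches; a large, consistent divergence between the two paths is what would support the hypothesis.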
