Logprobs keep changing when using the same prompt in chat.completion

This is a known “feature” of all current chat models: their output is non-deterministic, so logprobs will vary slightly between runs even with identical requests at temperature 0. The last models that would produce the same output for a given input were the retired GPT-3 series.
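
You can see this for yourself by sending the same request twice and comparing the logprob of the first sampled token. A quick sketch using the `openai` Python SDK (the model name `gpt-4o-mini` is just a stand-in; any chat model that exposes logprobs works):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def first_token_logprob(prompt: str) -> float:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whichever you're testing
        messages=[{"role": "user", "content": prompt}],
        logprobs=True,
        temperature=0,
        max_tokens=1,
    )
    # logprobs.content is a list of per-token logprob entries
    return resp.choices[0].logprobs.content[0].logprob

# Identical request, two runs: the values typically differ slightly.
print(first_token_logprob("Say hello."))
print(first_token_logprob("Say hello."))
```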

Exponentiating your logprobs gives probabilities of [0.99955572, 0.99999158, 0.99992268], so we can be pretty sure each top token is adequately differentiated from the remaining ~0.0005 probability mass of all other tokens :wink:
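
For reference, a logprob $l$ maps to a probability $p = e^{l}$. Here's a minimal sketch of pulling the per-token logprobs and converting them (again assuming `gpt-4o-mini` as a placeholder model):

```python
import math
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model exposing logprobs works
    messages=[{"role": "user", "content": "Say hello."}],
    logprobs=True,
    top_logprobs=3,  # also return the 3 most likely alternatives per position
    max_tokens=3,
)

# Convert each sampled token's logprob to a probability via exp().
for tok in resp.choices[0].logprobs.content:
    print(f"{tok.token!r}: p = {math.exp(tok.logprob):.8f}")
```

Summing `exp()` over the `top_logprobs` entries at a position also shows how little probability mass is left over for every other token in the vocabulary.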
