Clarifications on Log Probabilities for Chat Completion

Firstly, are the log probabilities natural logarithms or base 10? I could not find this confirmed anywhere in the documentation.

Secondly, I'm curious about the design: why does the endpoint return the log of the probability rather than the probability itself?

  1. Because the probabilities of less-likely tokens can be extremely small, a compact value like -9.3 is arguably easier to read and compare than the equivalent 0.00009, and log probabilities are also the natural form for downstream math such as temperature scaling.

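To illustrate the point above: working in log space keeps tiny probabilities readable and turns products into sums, which avoids floating-point underflow when combining the probabilities of many tokens. A minimal sketch (the specific logprob values are made up for illustration):

```python
import math

# A log probability of -9.3 corresponds to a very small probability:
p = math.exp(-9.3)  # ~9.1e-05

# Log space also turns products into sums, which is numerically safer
# when combining many token probabilities.
logprobs = [-9.3, -12.1, -15.8]
sequence_logprob = sum(logprobs)  # safe even for long sequences
sequence_prob = math.prod(math.exp(lp) for lp in logprobs)  # can underflow

assert math.isclose(math.exp(sequence_logprob), sequence_prob)
```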
To answer question 1: they are natural logarithms. Here's a bit of code for adding plain probabilities to the return object or stream; I haven't yet adapted it to the new API, but it should work on a response.model_dump() of the return object.
