Yeah, it works!

Some working details you need to know: you need to convert the `token_logprobs` to a linear scale before converting it to a percent.

Example: I have a categorizer fine-tuned from Babbage that is trained to output `' 0'` or `' 1'`. In this example, say `' 1'` means the lead is going to buy, and `' 0'` means the lead won't buy.

`"choices": [{"text": " 1", "index": 0, "logprobs": {"tokens": [" 1"], "token_logprobs": [-3.170517e-06], "top_logprobs": [{" One": -13.691312, " 1": -3.170517e-06}], "text_offset": [356]}, "finish_reason": "length"}], "usage": {"prompt_tokens": 124, "completion_tokens": 1, "total_tokens": 125}}'`

Then define the base of the natural logarithm:

`e = 2.718281828459`

Then take the log probs out of the JSON response and convert to linear:

```
# FullReaction is the parsed API response from above
# get the log probability (base e) of the chosen token
LogProb = FullReaction["choices"][0]["logprobs"]["token_logprobs"][0]
print(f"LogProb: {LogProb}")
# convert the log probability to a probability in [0,1]
Prob = e**LogProb
```
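
Equivalently, you can let Python's `math` module handle the constant instead of hand-typing it; this is the same computation:

```
import math

Prob = math.exp(LogProb)  # identical to e**LogProb
```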

Then multiply the `Prob` value by 100 to get your percent **for the chosen token**. So if the linear probability is 0.99 for `' 0'`, you set the buy percent to `1 - 0.99`, or 1%. If it is 0.99 for `' 1'`, you set it to 99%. Slight details, but that is all you need.
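
In code, that flip looks something like this (a minimal sketch; `token` and `Prob` come from the response as above):

```
token = FullReaction["choices"][0]["logprobs"]["tokens"][0]

# probability that the lead buys, whichever token was chosen
if token == " 1":
    BuyPercent = Prob * 100
else:  # token == " 0"
    BuyPercent = (1 - Prob) * 100

print(f"Chance of buying: {BuyPercent}%")
```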

To get the `logprobs` info to show up in the response, I believe you just request it in the call, and since I am making a binary decision on one token, here are the core JSON settings to send over:

`{"temperature": 0, "max_tokens": 1, "top_p": 1, "logprobs": 2, "frequency_penalty": 0, "presence_penalty": 0}`

Not sure why I request 2 of them (`logprobs: 2` returns the top two candidate tokens in `top_logprobs`), but I only use the single one from `token_logprobs`.

Details matter, but this is what you need to do to get it to work.

So in my example, I get:

```
# logprob for token ' 1' (= BUY!)
LogProb = -3.170517e-06
Prob = e**LogProb
print(Prob)
# 0.999996829488026
```

So the lead has a 99.9996829488026 percent chance of buying. WOOT!