The logprob value is unbounded, so I used a sigmoid (converting the API logarithms to probabilities) to bound it between 0 and 1, as below. Is this the correct approach? I see the token probability is 0.4, which seems very low for the correct tokens.

import os

import openai
import math

from dotenv import load_dotenv

OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')
# Initialize the OpenAI API client with your API key
openai.api_key = OPENAI_API_KEY

def sigmoid(x):
    """Sigmoid function to map any value to a range between 0 and 1."""
    return 1 / (1 + math.exp(-x))

def get_logprobs(prompt, model="text-davinci-002"):
    """Get log probabilities of tokens generated by OpenAI for a given prompt.

    Args:
    - prompt (str): The input text to generate a response.
    - model (str): The model to use for generation. Default is "text-davinci-002".

    Returns:
    - list: A list of per-token dictionaries of top log probabilities.
    """
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=25,  # You can adjust this value as needed
        logprobs=2,  # Number of log probabilities to return for each token
    )

    # Extract log probabilities from the response
    raw_logprobs = response.choices[0].logprobs.top_logprobs

    # Apply sigmoid function to map logprobs between 0 and 1
    # transformed_logprobs = [sigmoid(v) for v in raw_logprobs[0].values()]

    return raw_logprobs

# Example usage
prompt_text = "What is the capital of paris"
logprobs = get_logprobs(prompt_text)
for lp in logprobs:
    for key, value in lp.items():
        print(key, sigmoid(value))
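For reference, here is a minimal sketch (using a made-up logprob of -0.9163) contrasting the sigmoid with `math.exp`, which is the actual inverse of the natural log:

```python
import math

logprob = -0.9163  # example logprob as returned by the API (natural log of a probability)

# sigmoid squashes any real number into (0, 1), but it does not invert the log
sigmoid_value = 1 / (1 + math.exp(-logprob))

# exp() inverts the natural log, recovering the true probability
probability = math.exp(logprob)

print(round(sigmoid_value, 3))  # ≈ 0.286
print(round(probability, 3))    # ≈ 0.4
```

Note that the two give different numbers, so a sigmoid output is not the token's probability.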

The logprobs are simply that, probabilities, represented logarithmically (base e).

I think what you wish for is if the API returned the probabilities as they are shown in the playground, and then also gave the total probability coverage for each token. We can do that.

some imports

import os
import openai
import json
import math
import copy

A function to accept a completion return, and if it has logprobs, add a new item with the same logprob structure, but with 0-1 probabilities instead of exponents.

def add_probabilities(api_object):
    if not api_object['choices'][0]['logprobs']:
        return api_object
    # Create a deep copy of the api_object so we don't modify the original
    api_object_copy = copy.deepcopy(api_object)
    # Iterate over the choices
    for choice in api_object_copy['choices']:
        # Initialize the new "probabilities" dictionary
        probabilities = {}
        # Iterate over the "logprobs" dictionary
        for key, value in choice['logprobs'].items():
            # If the key is "tokens" or "text_offset", copy the value as is
            if key in ["tokens", "text_offset"]:
                probabilities[key] = value
            # If the key is "token_logprobs", convert the logprobs to probabilities
            elif key == "token_logprobs":
                probabilities[key] = [math.exp(logprob) for logprob in value]
            # If the key is "top_logprobs", iterate over the list of dictionaries and convert the logprobs to probabilities
            elif key == "top_logprobs":
                probabilities[key] = [{token: math.exp(logprob) for token, logprob in dict_item.items()} for dict_item in value]
        # Calculate total probabilities for each dictionary in "top_logprobs"
        probabilities["total_logprobs"] = [sum(dict_item.values()) for dict_item in probabilities["top_logprobs"]]
        # Add the "probabilities" dictionary to the choice
        choice['probabilities'] = probabilities
    return api_object_copy
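The key step in isolation: exponentiate each logprob to get a 0-1 probability, then sum the alternatives for total coverage. A sketch with one hypothetical position (logprob values made up):

```python
import math

# one position's top_logprobs entry (logprobs are natural logs of probabilities)
top_logprobs = [{" Hello": -0.5493, " Hi": -1.6593}]

# math.exp() inverts the natural log, giving probabilities in 0-1
probabilities = [{tok: math.exp(lp) for tok, lp in d.items()} for d in top_logprobs]

# total probability mass covered by the returned alternatives at this position
coverage = [sum(d.values()) for d in probabilities]

print(probabilities)  # ≈ [{' Hello': 0.577, ' Hi': 0.190}]
print(coverage)       # ≈ [0.768]
```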

a little API function, where you could handle all the errors possible

def do_api(api_prompt):
    api_params = {
        "model": "gpt-3.5-turbo-instruct",
        "max_tokens": 2,
        "top_p": 1e-9,
        "prompt": api_prompt,
        # "logprobs": 5,
        "n": 1,
    }
    try:
        api_response = openai.Completion.create(**api_params)
        return api_response
    except Exception as err:
        print(f"API Error: {str(err)}")

And then our own little playground

openai.api_key = os.getenv("OPENAI_API_KEY")
prompt="User: Say 'hello'.\nAI:"

print("##>", end="")
api_get = do_api(prompt)
api_out = add_probabilities(api_get)
api_string = json.dumps(api_out, indent=2)
#api_string = api_string.replace("\\n", "\n")
api_string = api_string.replace('\\"', '"')
print(api_string)

OUTPUT: what we get for a new “probabilities” item includes:

        "top_logprobs": [
          {
            " Hello": 0.5773516400136589,
            " Hi": 0.19021177923596874,
            "Hello": 0.067644583100199,
            "\n\n": 0.06604950779814563,
            "Hi": 0.025466052201047033
          },
          {
            "!": 0.5264271466063052,
            ".": 0.2591786724769263,
            ",": 0.09482852674076715,
            " there": 0.04379592072803984,
            "!\n": 0.024123977887792435
          }
        ],
        "text_offset": [...],
        "total_logprobs": [...]

You can see that the values for the second token match the playground:



Above, remove the comment hash from the logprobs line to actually get the 5 logprobs instead of a standard completion response.

Then, maybe you’ve enabled streaming? Instead, a little loop to pull the streaming deltas out of the generator object of api_get

openai.api_key = os.getenv("OPENAI_API_KEY")
prompt="User: Say 'hello'.\nAI:"

print("##>", end="")
api_get = do_api(prompt)
for chunk in api_get:
    chunk_out = add_probabilities(chunk)
    print(json.dumps(chunk_out, indent=2))

I’ll leave it to you to figure out the display of multibyte characters like emoji when you make each chunk its own UI object in your chat history display (that you can similarly hover or click).

Does top_logprobs always return the top 5 tokens for each token generation? Is it possible to increase it to more than 5? I am trying to play around with this to see if there is a correlation between hallucination and these probabilities.

Maximum of five. You put in a larger number for the parameter, and still only get five.

You can see in the screenshot that a higher number of returns used to be available.

If the generated text is not in the top-5, technically, you get 6.
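A quick way to see that with mock data (all logprob values below are hypothetical): count the entries per position. A position where the sampled token fell outside the top five carries it as a sixth entry:

```python
# hypothetical top_logprobs for two positions; in the second, the sampled
# token "!" was outside the top five, so it appears as a sixth entry
positions = [
    {" Hello": -0.55, " Hi": -1.66, "Hello": -2.69, "\n\n": -2.72, "Hi": -3.67},
    {".": -0.64, ",": -1.35, " there": -2.36, "!\n": -3.72, " Hey": -3.9, "!": -8.1},
]

for i, d in enumerate(positions):
    print(f"position {i}: {len(d)} alternatives")
```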

This is an awesome thread. Thank you @joyasree78 and @_j