Issue with Accessing 'choices' Attribute from OpenAI API Response

Hello everyone,

I’m currently working on a project that uses the OpenAI API to generate responses based on user input. I’m calling the openai.ChatCompletion.create method to send messages to the API and receive a response.

The issue comes up when I try to access the ‘choices’ attribute of the response object. Here’s the relevant part of my code:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message}
    ]
)

print(response)
analysis = response.choices[0].message.content.strip()

When I run this code, I get an error message saying “member choice is unknown”. However, when I print the response object, I can see that it includes a ‘choices’ field.

Here are the solutions I’ve tried so far:

  1. Accessing the ‘choices’ field with dictionary-like indexing (response['choices']). This didn’t work because the response object is not a dictionary.

  2. Checking the version of the OpenAI Python client library I’m using. I’m using the latest version, so this doesn’t seem to be the issue.

  3. Using the .choices attribute to access the ‘choices’ field (response.choices). This is the method that’s causing the “member choice is unknown” error.

I’m not sure what else to try at this point. I would appreciate any suggestions on how to resolve this issue. Specifically, I’m looking for a way to access the ‘choices’ field from the response object without encountering the “member choice is unknown” error.

Thank you in advance for your help!

Welcome to the community @Samso

This error indicates that somewhere in your source code you’re using choice instead of choices, so you need to fix that.

If the error still persists, you can use print(dir(response)) to inspect the response.
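
For a quick sanity check along those lines, something like this (just a sketch, assuming response already holds the result of the create call on openai 0.x):

print(type(response))          # on openai 0.x this should be an OpenAIObject
print(dir(response))           # 'choices' should appear somewhere in this list
print("choices" in response)   # the object behaves like a dict, so this should print True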

I mistyped; the error actually says choices: 'Member "choices" is unknown.' Thank you for responding so quickly, I really appreciate it. Thanks for the welcome too, glad to be here!

What does this show you?

import json

# Serialize the response object to a JSON-formatted string
json_string = json.dumps(response, indent=4)  # 'indent' adds pretty-printing for readability

# Print the JSON-formatted string
print(json_string)

Rather, can you post the entire output for us to debug?
I don’t recall seeing a “member choice is unknown” error before.

response should be an OpenAIObject; json.loads is not going to parse it if the call works correctly (it’s an object, not a JSON string).
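
As a rough sketch of that distinction (assuming response is what ChatCompletion.create returned on openai 0.x):

import json
from openai.openai_object import OpenAIObject

# a successful, non-streaming call returns an OpenAIObject, which behaves like a dict
print(isinstance(response, OpenAIObject))  # expected: True

# so json.dumps can serialize it directly; json.loads would need a string instead
print(json.dumps(response, indent=2))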

PS C:\Users\graci\training.py> python api.py
{
  "id": "chatcmpl-7x45pcRxbcihgcEH71KFcHwh8fBp7",
  "object": "chat.completion",
  "created": 1694311441,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Based on the information provided, the user has two derogatory marks on their credit report:\n\n1. Mark1 with description1: Unf

Problem/Error:
Cannot access member “choices” for type “Generator[Unknown | list[Unknown] | dict[Unknown, Unknown], None, None]”
Member “choices” is unknown


This is a valid chat.completion response object, provided that you truncated the rest of it after the content. That’s what I get, and it works perfectly fine for me.

What versions of Python and the openai module is that code running on?

Also, please rename the file to a custom name rather than using names like api or openai etc. (a script named openai.py will shadow the installed package).
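
One quick way to check for that kind of module shadowing (just a sketch, not specific to your project):

import openai

# if a file named openai.py sits next to your script, `import openai` loads that file
# instead of the installed package; this path should point into site-packages
print(openai.__file__)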

UPDATE & FIX:
For Python, according to the docs, you should access the content using:

response['choices'][0]['message']['content']
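
To make that concrete, a minimal sketch showing both access styles on an openai 0.x response (assuming a normal, non-streaming ChatCompletion.create call):

# dict-style access, as in the docs
content = response["choices"][0]["message"]["content"]

# attribute-style access also works at runtime on the OpenAIObject,
# even if an editor's static type checker complains about it
content = response.choices[0].message.content

print(content.strip())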

Let’s take some example input (an example of how gpt-3.5-turbo is on the edge of not understanding…)

ChatCompletion response, n=6:
output_object =
{
 "id": "chatcmpl-7xCLl4yD4nAqrBDXv2FUe6IBmIzKs",
 "object": "chat.completion",
 "created": 1694343181,
 "model": "gpt-3.5-turbo-0613",
 "choices": [
  {
   "index": 0,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"Why don't scientists trust atoms?\", \"joke_answer\": \"0. Because they make up everything!\"}"
   },
   "finish_reason": "stop"
  },
  {
   "index": 1,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"Why don\u2019t scientists trust atoms?\", \"joke_answer\": \"1. Because they make up everything!\"}"
   },
   "finish_reason": "stop"
  },
  {
   "index": 2,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"(AI random joke)\", \"joke_answer\": \"2. (punchline)\"}"
   },
   "finish_reason": "stop"
  },
  {
   "index": 3,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"(AI random joke)\", \"joke_answer\": \"3. I'm reading a book about anti-gravity. It's impossible to put down!\"}"
   },
   "finish_reason": "stop"
  },
  {
   "index": 4,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"Why don't scientists trust atoms?\", \"joke_answer\": \"4. Because they make up everything!\"}"
   },
   "finish_reason": "stop"
  },
  {
   "index": 5,
   "message": {
    "role": "assistant",
    "content": "{\"joke_question\": \"Why don't scientists trust atoms?\", \"joke_answer\": \"5. Because they make up everything!\"}"
   },
   "finish_reason": "stop"
  }
 ],
 "usage": {
  "prompt_tokens": 40,
  "completion_tokens": 159,
  "total_tokens": 199
 }
}

So we just put that in our example code as response, or you can start with the response of an actual chat completion call (though note that the response can be a generator, not a completed response object, if you’re streaming).

Then some code to extract one AI response, and some code to demonstrate picking a choice by index (written from AI specifications about as long as the output itself):

def pick_choice(response, index):
    choices = response.get("choices", [])
    
    if 0 <= index < len(choices):
        return choices[index]["message"]["content"]
    else:
        return "Error: Index does not exist in choices."

# Demonstration of the function
while True:
    try:
        choice_index = input("Enter choice index (0-5) or press Enter to exit: ")
        if not choice_index:
            break
        choice_index = int(choice_index)
        content = pick_choice(response, choice_index)
        print(content)
    except ValueError:
        print("Error: Invalid input. Please enter a valid integer index.")
    except IndexError as e:
        print(str(e))

(handle errors in a way that actually works for you)


Python Version: 3.11.1
OpenAI Version: 0.28.0

I changed the file name and formatted the response access in the style the documentation provided, and it’s still returning an error.

Error/Problem:
"__getitem__" method not defined on type "Generator[Unknown | list[Unknown] | dict[Unknown, Unknown], None, None]"

The environment seems fine.
Can we have the entire code instead of a snippet, if possible? I have not seen Python output this kind of error before. Normally the error would be a KeyError if you were trying to access a value in a dictionary.
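
For comparison, this is roughly what a genuine runtime failure would look like (a made-up dict, purely for illustration):

sample = {"choices": []}
try:
    value = sample["message"]  # key that isn't there
except KeyError as err:
    print(f"Runtime KeyError: {err}")

# a "Cannot access member ..." message, by contrast, usually comes from an
# editor's static analysis rather than from actually running the code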

def analyze_derogatory_marks(derog_counter):
    if isinstance(derog_counter, list):
        derog_counter = ', '.join(derog_counter)
    elif isinstance(derog_counter, dict):
        derog_counter = ', '.join(f"{k}: {v}" for k, v in derog_counter.items())

    system_message = "You are a helpful assistant that analyzes derogatory marks on a credit report."
    user_message = f"The user has the following derogatory marks on their credit report: {derog_counter}. " \
                   f"Please provide a summary of these marks, their potential impact on the user's credit score, " \
                   f"and suggestions on how to improve the credit score."

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message}
        ]
    )

    analysis = response['choices'][0]['message']['content']
    return analysis
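
For what it’s worth, a minimal way to exercise that function in isolation looks something like this (placeholder key and made-up marks):

import openai
openai.api_key = "sk-..."  # your key, or set the OPENAI_API_KEY environment variable

derog_counter = {"Mark1": "description1", "Mark2": "description2"}
print(analyze_derogatory_marks(derog_counter))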

Well, add a print(analysis) before the return. I have a feeling this is not where the error is.

Let’s make a new function that isolates the OpenAI call and returns indexable bare responses.

import openai
openai.api_key = "sk-12345" # your key (or environment)

def api_playground(msg,model="gpt-3.5-turbo",max_tokens=100,temperature=0.5,top_p=0.99,n=1):
    api_params = {"messages": msg,"model": model,"max_tokens": max_tokens,
                  "temperature": temperature,"top_p": top_p,"n": n}
    try:
        response = openai.ChatCompletion.create(**api_params)
        new_dictionary = {}
        for choice in response["choices"]:
            index = choice["index"]
            content = choice["message"]["content"]
            new_dictionary[index] = content
        return new_dictionary
    # many more openai-specific error handlers can go here
    except Exception as err:
        error_message = f"API Error: {str(err)}"
        return error_message

messages=[
    {
    "role": "system",
    "name": "joke_of_the_day_generator",
    "content": 'Output format of short joke: {"joke_question": "(AI random joke)", "joke_answer": "(punchline)"}'
    },
]
api_out = api_playground(messages, n=2, temperature=2) # returns dictionary of choices
print('--API Playground - outputs by index demo --')
print(api_out[0])
print(api_out[1])
print('---------------')

output (python 3.8.16):

--API Playground - outputs by index demo --
{"joke_question": "(Why don't skeletons fight each other?)", "joke_answer": "(They don't have the guts!)"}
{"joke_question": "Why don't scientists trust atoms?", "joke_answer": "Because they make up everything!"}
---------------

If that doesn’t work for you, take your Python > 3.9 and bit-bucket it.

Why is the JSON content chopped off? Basically, if that object looks good but you still cannot access a property you know is there, you likely have some kind of corrupt environment/runtime, or even a memory/hardware issue. I’d uninstall Python completely and reinstall it if that’s the case.


Yeah. I think that’s a good idea. I’ll get that done and let everyone know if it resolves the issue or not.

One more thing to try first: get an example from somewhere and paste it into a new file to run. Sometimes there can be character corruption, like a wacky hidden character in a source file or a typo that just continually goes unnoticed, and running a totally different source file might prove that.


Note: if you install Python “for all users”, it is installed under a system/administrator account (my preferred way).

You must then also do any pip install as administrator: cmd.exe -> “Run as administrator”.


I’m on 3.11.2 and the latest openai module

and this works for me just as shown.

You can try specifying the type for response as dict before you do a complete re-install:

response: dict = openai.ChatCompletion.create(...)
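
Along the same lines, a small sketch using an explicit cast so the checker treats the result as a plain dict (the call arguments here are just placeholders):

from typing import Any, Dict, cast

import openai

response = cast(Dict[str, Any], openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
))
content = response["choices"][0]["message"]["content"]
print(content)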

Also, what IDE/editor are you using @Samso?

VSCode. Weirdly enough, I just uninstalled and reinstalled everything, including VSCode, and the error cleared.


I guess we all neglected to mention using a ‘debugger’. 🪲 Glad you got it fixed. Sometimes stuff just goes screwball. I’ve gotten to where, if VSCode shows me a syntax error and I can’t find it in 10 seconds, I restart VSCode, and most of the time that fixes it. Haha.
