Struggling with the version or code

I am using this code
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # Use the appropriate model for your needs
    messages=[
        {"role": "user", "content": prompt}
    ],
    max_tokens=150  # Adjust token limit if necessary
)

I have tried adapting my code to both openai version 0.28.0 and the latest one, 1.40.1, and both give the same error.

The above code gives this error:

An error occurred: 'ChatCompletionMessage' object is not subscriptable

The following code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # Use the appropriate model for your needs
    messages=[
        {"role": "user", "content": prompt}
    ],
    max_tokens=150  # Adjust token limit if necessary
)

Error
An error occurred:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at

You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here:

What do I use? I am dead stuck.

Hi,

Here is the format for making a call to the API

import os
from openai import OpenAI

# Read the API key from the environment before creating the client
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY is not set in the environment variables.")

client = OpenAI(api_key=openai_api_key)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Please tell me about yourself"}
    ],
    temperature=0,
)
print(response.choices[0].message.content.strip())

Again, an error.
Code:
from openai import OpenAI

client = OpenAI()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY is not set in the environment variables.")
client.api_key = openai_api_key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # Use the appropriate model for your needs
    messages=[
        {"role": "user", "content": prompt}
    ],
    temperature=0
)

text = response.choices[0].message['content'].strip()

Error:
An error occurred: 'ChatCompletionMessage' object is not subscriptable

Have you updated the OpenAI library?

pip install openai --upgrade

in a terminal
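
If you want to confirm the upgrade actually took effect for the interpreter your script runs under, a quick check from Python itself (just a sketch) is:

import openai

print(openai.__version__)  # should report 1.x (e.g. 1.40.1) after the upgrade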



You must be linking to another version of the library.

Is your path set up correctly?

I've just tested that code and it works fine (I had initially missed the parentheses on the print statement; other than that it runs OK).

Where does that linking happen?

Your development environment will have a path setting somewhere. I don't know your setup, so I can't give you any more advice than that.
If you're using Windows, have a look in the environment variables dialog box.

It could also be set in your VS Code environment.

But it 100% seems like you are linking to an older version of the library somewhere.
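
One way to see which interpreter and which copy of the library the script is actually using at runtime (a small sketch; the printed paths will of course depend on your machine):

import sys
import openai

print(sys.executable)      # the Python interpreter that is running this script
print(openai.__version__)  # the version seen at runtime, which may differ from what pip reports
print(openai.__file__)     # the folder the openai package was actually imported from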


No, not your key; the PATH just below it. See if you have multiple places your OpenAI library could be living.

As in, have you installed the OpenAI lib in more than one dev environment, folder structure, or drive? You seem to be getting the latest version back when you run pip, but it seems that is not the case for your runtime environment.

Try uninstalling the OpenAI library and see if the code still runs. If it does, then you know you have an old version someplace.

'ChatCompletionMessage' object is not subscriptable

If you need to check whether the version is the latest, please run conda list to verify the version of the installed OpenAI module.

If the version is up to date, the code should function correctly by using:

text = response.choices[0].message.content


in the working directory

I understand that the Python module version is not the issue.
Could you please try replacing it with the example I provided earlier?

Thanks a lot, this code worked.

But when I try to troubleshoot and get the desired outcomes, I use ChatGPT for code revisions and suggestions. I am not sure why .strip() is creating issues.

ChatGPT has advised .strip() in a lot of places to remove whitespace.

# Extract the response text
text = response.choices[0].message['content'].strip()

The code I provided, response.choices[0].message.content, might have been confusing.

The intent of this code was not to suggest that using .strip() is problematic.

The issue lies in the use of bracket notation to access the content field.
Because the response objects are Pydantic models, which enforce type safety, the error "object is not subscriptable" occurs when you try to index them like a dictionary.

With each version update, the internal specifications of the OpenAI Python module have changed slightly. In the current version of the OpenAI Python module, this notation (bracket notation) cannot be used directly, and dot notation must be used instead.

So, as long as bracket notation is avoided, there should be no error with either

response.choices[0].message.content

or

response.choices[0].message.content.strip().
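
If you want to see that behaviour in isolation, here is a tiny toy model in the same spirit as the response objects (my own illustrative example, not the library's actual classes):

from pydantic import BaseModel

class Message(BaseModel):
    content: str

msg = Message(content="hello")
print(msg.content)    # dot notation works
# msg["content"]      # would raise TypeError: 'Message' object is not subscriptable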

 

Sorry for the confusion.

When I wrote “directly” here, I meant that you can also use bracket notation if you convert the response object to a dictionary.

By converting the response object like this:

res = response.dict()

and then applying bracket notation to the relevant parts, as shown below, you can avoid errors and retrieve the result:

res['choices'][0]['message']['content'].strip()
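
For what it is worth, if your install uses Pydantic v2 (the default with recent versions of the library), model_dump() is the non-deprecated spelling of the same conversion, so a sketch like this should behave the same way:

res = response.model_dump()
text = res['choices'][0]['message']['content'].strip()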

Additionally, the reason ChatGPT often recommends using “.strip()” to remove whitespace is that whitespace can often cause problems in subsequent processing.

However, whether or not the presence of whitespace is actually a problem depends on the specific requirements of your task.

So, instead of taking ChatGPT's suggestions at face value, please make sure to apply them according to your own specific needs 🙂
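
As a small, made-up illustration of when trailing whitespace actually matters:

reply = "Yes\n"                # model output that happens to end with a newline
print(reply == "Yes")          # False - the trailing newline breaks an exact comparison
print(reply.strip() == "Yes")  # True after stripping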