However, if one tries to get the chatbot response using:
response['choices'][0]['message']['content'] as described here: OpenAI Platform

One gets: TypeError: 'ChatCompletion' object is not subscriptable

I would like to mention that this should be fixed in the documentation to:

content = response.choices[0].message.content
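
For anyone who still wants dictionary-style access, here is a minimal sketch, assuming response is the ChatCompletion object returned by the v1 client (the library's response objects are pydantic models):

# 'response' is assumed to come from client.chat.completions.create()
as_dict = response.model_dump()          # pydantic model -> plain dict
content = as_dict["choices"][0]["message"]["content"]
print(content)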


It works in Google Colab. Try:

from openai import OpenAI

client = OpenAI(
    api_key="Your API KEY",
)

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "xxxxx"},
        {"role": "user", "content": "xxxxxx"},
    ],
    model="gpt-XXXX",
    max_tokens=XXXX,
)

print(chat_completion.choices[0].message.content)


Thanks, it works for me: pip install openai==0.27.8



Guys, could someone help me with this?

I’m getting this now! AttributeError: module 'openai' has no attribute 'Completion'

I fixed this issue by uninstalling OpenAI:

pip3 uninstall openai

Then reinstalling it:

pip3 install openai

Getting this error just today, worked yesterday without issues.


Came to this issue on Google Colab. The following works:

!pip3 install openai

from openai import OpenAI
from google.colab import userdata

client = OpenAI(
    api_key=userdata.get('OPENAI_API_KEY'),
)
def llm_response(prompt):
    response = client.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=[{'role':'user','content':prompt}],
        temperature=0
    )
    return response.choices[0].message.content

prompt = '''
    Classify the following review 
    as having either a positive or
    negative sentiment:

    The banana pudding was really tasty!
'''

response = llm_response(prompt)
print(response)

Perhaps, when posting in this thread, someone could spend thirty seconds reading, install “openai classic”, and press the thanks button on the answer above…

pip install "openai<1.0.0"

Or, alternatively, update your code for the new methods introduced by the API library changes.
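
To make that concrete, here is a rough before/after sketch of the same chat call; the model name and prompt are placeholders:

# Old style, openai<1.0.0:
import openai
openai.api_key = "YOUR_API_KEY"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])

# New style, openai>=1.0.0:
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)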


If OpenAI had given anyone a heads-up, instead of jumping from 1.0.0beta2 all the way to 1.1.1 internally and dumping wheels on those millions of developers, maybe a generous person could have written and submitted a pull request for another cookbook notebook: “how this all works without paying for a 3rd-party code-conversion service, all the way from simple calls up to asyncio streaming multimodal multi-client with token-counting chat history client management”.


I had the same problem while using the skll library.

The only solution was to install openai version 0.28.1

pip install openai==0.28.1


Thanks, this works for me. I’m using Ubuntu on WSL2 and VS Code.

The error you’re encountering indicates that ChatCompletion is not a subscriptable object, meaning you can’t use indexing ([]) directly on it. It seems like the response object is not a dictionary, but an instance of a ChatCompletion class.

When dealing with a class instance, you would typically access its attributes using dot notation. If you are using the OpenAI Python client, the attributes of the ChatCompletion object are accessed the same way. The printed output looks like a dictionary, which is why subscripting seems like it should work, but it only would if the response actually were one.

Here is how I fixed this:

# Assuming 'response' is an instance of a ChatCompletion or similar class 

message_content = response.choices[0].message.content 

print(message_content)

Using dot notation instead of bracket indexing worked.


As marciobernardo1 mentioned above, try openai version 0.28.1, since the ChatCompletion attribute is still there and your code should work as expected.

Hello guys.

I’m having a similar problem, but I’m not using it that way. I’m using it with Discord to create a bot, and it’s not working.

import os
import discord
import openai
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
intents.members = True
prefix = "!"
bot = commands.Bot(command_prefix=prefix, intents=intents)

openai.api_key = os.environ.get("key_chatgpt")

# OpenAI
class ChatBotCog(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def ask(self,ctx, *, question):
        try:
            conversation = [
                {"role": "system", "content": "MEU NOME E Role Aleatorio, E AGORA TAMBÉM SOU UM CHAT-BOT"},
                {"role": "user", "content": question}
            ]   

            response = openai.Completion.create(
                engine="gpt-3.5-turbo",
                messages=conversation,
                max_tokens=1024
            )

            await ctx.send(response.completion.choices[0].message.content)
        except Exception as error:
            await ctx.send(f"Ocorreu um erro: {str(error)}")
  
async def setup(bot):
    await bot.add_cog(ChatBotCog(bot))

This is the code, and this is the problem:


Can you help me with this?

Following your code, I get an error:
APITimeoutError: Request timed out.
Why is this?

That has multiple bot-written problems. You are calling a completion endpoint with a chat model, and using the deprecated “engine” parameter. You are trying to extract a chat response from a completion reply. It is written for the old library.

It is off-topic for this thread, which is about the incompatibility people hit when updating the openai library.
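
For reference, a minimal sketch of what that OpenAI call could look like on the current library (the key_chatgpt environment variable and gpt-3.5-turbo model come from the post above; the message strings are placeholders):

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("key_chatgpt"))

conversation = [
    {"role": "system", "content": "You are a helpful chat bot."},   # placeholder persona
    {"role": "user", "content": "the user's question goes here"},   # placeholder question
]

# Chat models go through the chat completions endpoint, and "model"
# replaces the deprecated "engine" parameter.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=conversation,
    max_tokens=1024,
)

# The reply is read with dot notation, not response.completion[...].
print(response.choices[0].message.content)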

That’s because of the volume of requests hitting the API

WHERE is this openai.py file that everyone is talking about?

“openai” is a python library. There’s a quickstart you could have read, but let’s jump in.

If you have Python 3.8-3.11 installed on your system for compatibility, you can, at your command line or shell:

pip install --upgrade openai

to install the latest version of the openai python library (wheel) and its dependencies.

You can then run Python scripts, applications, or more advanced uses with the new v1.1 client-object programming style introduced November 6:

from openai import OpenAI
client = OpenAI()

system = [{"role": "system", "content":
           "You are Jbot, a helpful AI assistant."}]
user = [{"role": "user", "content":
         "introduce Jbot"}]
chat = []

while not user[0]['content'] == "exit":
    try:
        response = client.chat.completions.create(
            messages=system + chat[-20:] + user,
            model="gpt-3.5-turbo",
            max_tokens=1000, top_p=0.9,
            stream=True,
            )
    except Exception as err:
        print(f"Unexpected {err=}, {type(err)=}")
        raise
    reply = ""
    for part in response:
        word = part.choices[0].delta.content or ""
        reply += word
        print(word, end="")
    chat += user + [{"role": "assistant", "content": reply}]
    user = [{"role": "user", "content": input("\nPrompt: ")}]

Don’t save your programs as openai.py, as that will break everything. Save this as streaming_chatbot.py.

You must then finally obtain an API key from your (funded) API account, and set the key value as an OS environment variable named OPENAI_API_KEY.
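
As a quick sanity check, a small sketch that assumes the key is already set in the environment; the OpenAI() constructor reads OPENAI_API_KEY by default:

import os
from openai import OpenAI

# Only verifies that the variable is visible to Python; the client
# itself will also raise if no key can be found.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

client = OpenAI()  # picks up OPENAI_API_KEY from the environment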


“openai” is a python library. There’s a quickstart you could have read, but let’s jump in.

LOL. The quickstart is super simple too. You explained it well though.