Error: You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0

Hello,
I was working on a project and started getting the error "You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0".

I have openai v1.54.4, which I believe is the newer version. Downgrading to 0.27.0 seems to work, but do I want to continue my project using an outdated version? I don't think so…
Below is the code I used to catch the error:

import openai
import os
from dotenv import load_dotenv

load_dotenv()

openai.api_key = os.getenv("OPENAI_API_KEY")

try:
    response = openai.ChatCompletion.create(  # the pre-1.0 interface, removed in openai>=1.0
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}]
    )
    print(response['choices'][0]['message']['content'])
except Exception as e:
    print("Error:", e)

You have found a code example that is obsolete.

Chat away with this example instead:

""" Python chatbot example with chat history. Python 3.8-3.11, OpenAI 1.1+ - _j """
from openai import OpenAI

client = OpenAI()

# Persistent system instruction gives the AI its job (triple quotes allow multi-line input)
system = [{"role": "system", "content":"""
You are BonitaBot, a helpful expert AI assistant.
- Produce step-by-step reasoning when answering, in a manner that always fulfills the user's request
""".strip()}]

# A list to contain past user/assistant exchanges
chat_hist = []

# An initial user input - immediately sent to test your connection
user = [{"role": "user", "content": "introduce yourself"}]

# A template for the API request, everything but the messages
api_parameters = {  # we use dictionary format for modifiability and for ** unpacking
  "model": "gpt-4o-mini",  # We use the cheaper chat model here instead of gpt-4o
  "max_tokens": 2000,  # set token reservation/maximum length for response
  "top_p": 0.5,        # sampling parameter, less than 1 reduces the 'tail' of poor tokens (0-1)
  "temperature": 0.5,  # sampling parameter, less than 1 favors higher-certainty token choices (0-2)
  "user": "myuserid",  # track your users to openai, some risk mitigation
  "stream": True,      # streaming needs iteration over a generator return object
  # more parameters like tools, response format, ...: see API reference
}

# Now we enter the chat loop - type "exit" to quit
while user[0]['content'] != "exit":

    # add messages: system message, a 10 turn limited-length history, and the latest input
    api_parameters.update({"messages": system + chat_hist[-10*2:] + user})  # concatenate lists

    # Now we send off our request to the API (using dictionary unpacking to parameter)
    try:
        response = client.chat.completions.create(**api_parameters)

    # ...and catch any errors and abort if they happen (you can be smarter)
    except Exception as err:
        print(f"Unexpected {err=}, {type(err)=}")
        raise

    reply = ""  # this is going to contain the assembled response from AI

    # Chunks of the SSE stream are received and printed as they arrive;
    # each chunk is a Pydantic model, giving attribute access like .delta.content
    for part in response:
        word = part.choices[0].delta.content or ""
        reply += word
        print(word, end="", flush=True)  # flush so each token appears immediately

    # Finally, add both the user's message and the AI's reply to the chat history
    chat_hist += user + [{"role": "assistant", "content": reply}]

    # Now it's time for the user to put in their own input, and we go again!
    user = [{"role": "user", "content": input("\nPrompt: ")}]
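
If you don't want streaming, a minimal non-streaming sketch (reusing the same client and api_parameters from above) looks like this - note that openai>=1.0 returns an object, so the old response['choices'][0]['message']['content'] becomes attribute access:

api_parameters["stream"] = False                # turn off streaming
api_parameters["messages"] = system + user      # a single exchange, no history
response = client.chat.completions.create(**api_parameters)
print(response.choices[0].message.content)      # attribute access on the Pydantic object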

This uses the environment variable OPENAI_API_KEY, which you must set to your funded API key.
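
The OpenAI() client reads that variable automatically. If you prefer, you can also pass the key explicitly - a sketch, where the environment lookup is just one way to load it:

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # explicit key instead of implicit lookup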
