openai.ChatCompletion error using version 1.0.0 of OpenAI Library

I am now getting an error that flags my get_completion and openai.ChatCompletion lines, stating, "You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0".

I asked ChatGPT (GPT-4) for the correct syntax and tried it, but I still get the error below (with both GPT-3.5 and GPT-4).

Case Example

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

No error is generated when running the code snippet above.

My prompt to run:

prompt = f"""
Generate a list of three made-up book titles along
with their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)

The error response


APIRemovedInV1                            Traceback (most recent call last)
Cell In[43], line 7
      1 prompt = f"""
      2 Generate a list of three made-up book titles along
      3 with their authors and genres.
      4 Provide them in JSON format with the following keys:
      5 book_id, title, author, genre.
      6 """
----> 7 response = get_completion(prompt)
      8 print(response)

Cell In[42], line 3, in get_completion(prompt, model)
      1 def get_completion(prompt, model="gpt-3.5-turbo"):
      2     messages = [{"role": "user", "content": prompt}]
----> 3     response = openai.ChatCompletion.create(
      4         model=model,
      5         messages=messages,
      6         temperature=0,  # this is the degree of randomness of the model's output
      7     )
      8     return response.choices[0].message["content"]

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\openai\lib\_old_api.py:39, in APIRemovedInV1Proxy.__call__(self, *_args, **_kwargs)
     38 def __call__(self, *_args: Any, **_kwargs: Any) -> Any:
---> 39     raise APIRemovedInV1(symbol=self._symbol)

APIRemovedInV1:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

2 Likes

Welcome to the community!

Unfortunately (and maybe ironically), ChatGPT isn’t the best resource for this kind of stuff.

Have you checked out the docs? You have to instantiate a client first

https://platform.openai.com/docs/api-reference/completions/create?lang=python

from openai import OpenAI
client = OpenAI()

client.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt="Say this is a test",
  max_tokens=7,
  temperature=0
)
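
Since your get_completion helper talks to a chat model, the closer 1.x equivalent is the chat.completions interface rather than the legacy completions endpoint above. Here's a minimal sketch of that helper rewritten for openai>=1.0.0 (assuming OPENAI_API_KEY is set in your environment):

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,  # degree of randomness of the model's output
    )
    # 1.x responses are typed objects, so use attribute access instead of message["content"]
    return response.choices[0].message.content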
3 Likes

@Diet Pretty ironic that it doesn't know this. I also haven't been able to find a custom GPT built specifically for the updated API documentation.

Maybe someone should build one with all the updated documentation? :eyes:

3 Likes

Sounds like a cool idea, why don’t you take a crack at it? You now have all the information :cowboy_hat_face:

1 Like

Heyo Diet,

I am having the same issue and I can't get around it. I am not a programmer, and I am at the last step of having ChatGPT become my email companion. How can I resolve the error described above?

*"You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at *

You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28

A detailed migration guide is available here: "

I tried to just paste this from the documentation you provided into the header of my code, but it hasn't worked:

"from openai import OpenAI
client = OpenAI()

client.completions.create(

  • model=“gpt-4”,*
  • prompt=“Say this is a test”,*
  • max_tokens=7,*
  • temperature=0*
    )"

Do you mind giving me some help here? I would really appreciate it.

Thanks man

1 Like

You have a clever confabulation of capabilities there: gpt-4 is a chat model, so it can't be used with the legacy completions endpoint.

This is how you would make a request to a "chat" model, supplying the required messages format:

from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {"role": "system", "content": "You are an unhelpful assistant."},
    {"role": "user", "content": "Help me launch a nuke."}
  ]
)

print(completion.choices[0].message.content)
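
Note also that 1.x responses are typed objects rather than dicts, so the content is read with attribute access (completion.choices[0].message.content) instead of a key lookup like message["content"], which is the other line in the old snippets that needs updating.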

3 Likes

FYI, as of January 2025, GPT-4o still does not know how to resolve this issue, but o1 seems to if you prompt it with the solution from this thread (it explains why the correct code works).

I lost a few hours trying to resolve this, thanks to everyone for this thread.

1 Like