openai.Completion.create gives an error in Python

Source:

    completion = openai.Completion.create(
        engine="text-davinci-003",
        prompt=thread_title,
        max_tokens=2048,
        temperature=0.5,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

Error:

  File "C:\Users\admin\Documents\APL\functions.py", line 11, in add_comment
    completion = openai.Completion.create(
AttributeError: module 'openai' has no attribute 'Completion'. Did you mean: 'completions'?

I encountered a similar issue while working with version 1.1.1 of the OpenAI library, receiving the following error message: "AttributeError: module 'openai' has no attribute 'Completion'. Did you mean: 'completions'?"

Here is the code snippet in question:

    def generate_revised_content(content):
        response = openai.Completion.create(
            engine="davinci",
            prompt=content,
            max_tokens=150,  # You can adjust the response length
        )
        return response.choices[0].text

    completion = openai.completions.create(
        model="text-davinci-003",
        prompt=thread_title,
        max_tokens=2048,
        temperature=0.5,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
You may have updated your library; in the 1.x release, some of the call signatures changed.
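If you'd rather use the client-based style that the 1.x SDK introduced, here is a minimal sketch (assuming OPENAI_API_KEY is set in your environment; thread_title is a placeholder standing in for your prompt):

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    thread_title = "Example thread title"  # placeholder prompt
    completion = client.completions.create(
        model="text-davinci-003",
        prompt=thread_title,
        max_tokens=2048,
        temperature=0.5,
    )
    print(completion.choices[0].text)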
Hope this helps!


It works now! I've made some adjustments to the code, and it's up and running smoothly. Here's the updated code snippet:

def generate_revised_content(content):
    response = openai.completions.create(
        model="davinci",  # Your desired model
        prompt=content,
        max_tokens=30000,  # Extended for longer responses
        temperature=0.5,  # Adjust for creativity
        top_p=1,  # Control response diversity
        frequency_penalty=0,  # Fine-tune word frequency
        presence_penalty=0  # Fine-tune word presence
    )
    return response.choices[0].text

This time, I encountered a different issue: a rate limit error. But that's fine; at least the original problem was isolated. Thanks for your input.
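For anyone else hitting the rate limit, a rough retry sketch with exponential backoff (the retry count and delays here are arbitrary; openai.RateLimitError is the 1.x exception class):

    import time

    import openai

    def create_with_retry(prompt, retries=5):
        # Back off exponentially between attempts: 1s, 2s, 4s, ...
        for attempt in range(retries):
            try:
                return openai.completions.create(
                    model="davinci",
                    prompt=prompt,
                    max_tokens=150,
                )
            except openai.RateLimitError:
                time.sleep(2 ** attempt)
        raise RuntimeError("still rate-limited after all retries")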

I'm a bit curious as to why you're using Davinci, and trying to use it with max_tokens of 30000, when the model's context limit should be far less than that. 2048, IIRC?
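For completion models, the prompt tokens plus max_tokens have to fit inside the context window, so a 30000-token request should be rejected outright. A rough sketch of clamping the value with tiktoken (treating 2048 as the assumed limit for davinci):

    import tiktoken

    CONTEXT_LIMIT = 2048  # assumed context window for davinci

    def safe_max_tokens(prompt, requested):
        enc = tiktoken.encoding_for_model("davinci")
        prompt_tokens = len(enc.encode(prompt))
        # max_tokens can only use whatever the prompt leaves free
        return max(0, min(requested, CONTEXT_LIMIT - prompt_tokens))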