'OpenAI' object has no attribute 'Completion'

import hashlib
import redis
from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv()
client = OpenAI()
OpenAI.api_key = os.getenv('OPENAI_API_KEY')
def call_openai_api(prompt, model="gpt-4"):
    try:
        chat_completion = client.completions.create(
            model=model,  # Use the specified model
            messages=[{"role": "user", "content": prompt}],  # Send the prompt as a message
        )
        return chat_completion.choices[0].message.content.strip()
    except Exception as e:
        return f"Error occurred: {str(e)}"

This is the relevant part of my code. I modified the method, but the terminal still shows the same error. Can anyone explain it to me, please? I'm new to this.

Here is a basic example:

from openai import OpenAI

client = OpenAI()

PROMPT = "give me a random poem"

def call_openai_api():

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    print(response.choices[0].message.content.strip())

if __name__ == "__main__":
    call_openai_api()

Save it as openai_test.py.

Then you typically create a virtual env (optional but suggested)

python3 -m venv venv
source venv/bin/activate

And you will need to export the OPENAI_API_KEY - e.g. on Ubuntu you do:

export OPENAI_API_KEY=sk....your_key_here

There is no need to explicitly set that on the OpenAI object - it reads the OPENAI_API_KEY env variable by default.

and then you can install the openai library and run the script

pip install openai
python3 openai_test.py

you can deactivate the venv by just typing

deactivate
Response: Error occurred: 'ChatCompletion' object is not subscriptable

After modifying it per your suggestion, I got this in my main.py. The test with your suggestion went fine; it's just my code that has some issues.

I did not suggest a modification; I posted a working example.

Could you show your modified code?

import hashlib
import redis
from openai import OpenAI
import os

client = OpenAI()

# Connect to Redis (update with your Redis server details)
redis_client = redis.StrictRedis(host='localhost', port=6379, decode_responses=True)

# Function to generate cache key from the prompt (hash the prompt)
def generate_cache_key(prompt):
    return hashlib.sha256(prompt.encode('utf-8')).hexdigest()

# Function to get the cached response from Redis
def get_cached_response(prompt):
    cache_key = generate_cache_key(prompt)  # Generate unique cache key for the prompt
    cached_response = redis_client.get(cache_key)  # Retrieve the response from Redis
    return cached_response

# Function to store the prompt response in Redis
def cache_prompt_response(prompt, response):
    cache_key = generate_cache_key(prompt)  # Generate the cache key for the prompt
    redis_client.setex(cache_key, 3600, response)  # Store with TTL of 1 hour

# Function to call OpenAI API and get the response based on model
def call_openai_api(prompt, model="gpt-4"):
    try:
        # Call the chat completion endpoint with required arguments
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt}
            ]
        )
        print(response.choices[0].message.content.strip())
    except Exception as e:
        return f"Error occurred: {str(e)}"

# Main function to handle the caching and API call
def process_prompt(prompt, model="gpt-4"):
    # Check if the response is already cached in Redis
    cached_response = get_cached_response(prompt)
    
    if cached_response:
        print("Cache hit! Using cached response.")
        return cached_response
    else:
        print(f"Cache miss. Calling OpenAI API with model: {model}...")
        # Call OpenAI API and cache the response
        response = call_openai_api(prompt, model)
        cache_prompt_response(prompt, response)
        return response

# Example usage
if __name__ == '__main__':
    prompt = "How does AI work?"  # Replace with your prompt
    model = "gpt-4"  # Replace with any available OpenAI model like "gpt-4", "gpt-3.5-turbo", etc.
    
    response = process_prompt(prompt, model)
    print("Response:", response)
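Side note: the cache-key part of that code is fine on its own - SHA-256 is deterministic, so the same prompt always maps to the same Redis key. A standalone sketch of just that piece:

```python
import hashlib

def generate_cache_key(prompt):
    # Same hashing as in the snippet above: SHA-256 of the UTF-8 bytes
    return hashlib.sha256(prompt.encode('utf-8')).hexdigest()

key1 = generate_cache_key("How does AI work?")
key2 = generate_cache_key("How does AI work?")
print(key1 == key2)  # identical prompts produce the identical key
print(len(key1))     # a SHA-256 hexdigest is always 64 hex characters
```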

This is what I'm trying to do. I've been stuck here for a while.

The method doesn't have a return value, so where is the response supposed to come from?
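A Python function without a return statement hands the caller None. A tiny illustration (the function name here is made up):

```python
def call_api_prints_only(prompt):
    # Mimics the bug above: the function prints the answer
    # but has no return statement
    print(f"answer to: {prompt}")

result = call_api_prints_only("How does AI work?")
print(result)  # None - that None is what ends up cached and returned
```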

I asked ChatGPT: "

here is some code:

[your code]

what is wrong?
"

and it came up with this here:

import hashlib
import redis
import openai
import os
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)

# Initialize OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Connect to Redis
redis_client = redis.StrictRedis(host='localhost', port=6379, decode_responses=True)

# Function to generate cache key from the prompt
def generate_cache_key(prompt):
    return hashlib.sha256(prompt.encode('utf-8')).hexdigest()

# Function to get the cached response from Redis
def get_cached_response(prompt):
    cache_key = generate_cache_key(prompt)
    return redis_client.get(cache_key)

# Function to store the prompt response in Redis
def cache_prompt_response(prompt, response):
    if response:  # Ensure response is not None
        cache_key = generate_cache_key(prompt)
        redis_client.setex(cache_key, 3600, str(response))

# Function to call OpenAI API
def call_openai_api(prompt, model="gpt-4"):
    try:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt}
            ]
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        logging.error(f"Error occurred: {str(e)}")
        return None

# Main function to process the prompt
def process_prompt(prompt, model="gpt-4"):
    cached_response = get_cached_response(prompt)
    if cached_response:
        logging.info("Cache hit! Using cached response.")
        return cached_response
    else:
        logging.info(f"Cache miss. Calling OpenAI API with model: {model}...")
        response = call_openai_api(prompt, model)
        if response:
            cache_prompt_response(prompt, response)
        return response or "An error occurred while processing your request."

# Example usage
if __name__ == '__main__':
    prompt = "How does AI work?"
    model = "gpt-4"
    response = process_prompt(prompt, model)
    print("Response:", response)

which also fixes some other stuff besides the missing return

oh… upon a short review I saw it introduced some new bugs as well. Typical programmer…

let's debug that real quick…

import hashlib
import redis
import openai
import os
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)

looks ok to me…

that is not needed

it also downgraded this here to the old version of the openai lib (the API changed with v1.0.0; the last old-style release was 0.28.x) - so we have to integrate the stuff that I posted into the running code.

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt}
        ],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

It kept showing the same error as before: 'Response: Error occurred: 'ChatCompletion' object is not subscriptable'

Well, maybe use my example - then add, let's say, just a prompt variable as a parameter to the function and work your way up, instead of refactoring it in a way where you don't know which change caused the error?

Remove the "strip()" from the pydantic methods that get the response content out.

Better, use multiple steps to get parts of the response as submodels to do further extraction on:

response_0 = response.choices[0]
...

You can do that unnecessary whitespace stripping after you have set a string variable with a strip() str method.

You can't do dictionary['key_name'] extraction on the response object, unless you do a full .model_dump() on it to serialize to a new object.
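That "not subscriptable" error is easy to reproduce with any plain object. Here is a made-up stand-in class (not the real SDK model), just to show the difference between attribute access and dict-style indexing:

```python
class Message:
    """Hypothetical stand-in for the pydantic model the SDK returns."""
    def __init__(self, content):
        self.content = content

msg = Message("hello")
print(msg.content)   # attribute access works
try:
    msg["content"]   # dict-style indexing on a plain object raises
except TypeError as e:
    print(e)         # 'Message' object is not subscriptable
```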

Dump all that caching idea for the random chance you have the same input, and just start with the API reference example for making an API call to see how you can first achieve success.

Like this:

from openai import OpenAI

client = OpenAI()

PROMPT = "give me a random poem"

def call_openai_api():

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    print(response.choices[0].message.content.strip())

if __name__ == "__main__":
    call_openai_api()

first we run it and when it runs we will add parameters to the function

from openai import OpenAI

client = OpenAI()

def call_openai_api(prompt):

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    print(response.choices[0].message.content.strip())

if __name__ == "__main__":
    call_openai_api('write a poem')

then introduce a return value to the function instead of letting it just print the result

from openai import OpenAI

client = OpenAI()

def call_openai_api(prompt):

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    # here we add the return instead of printing the response directly
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    response_text = call_openai_api('write a poem')
    print(response_text)

then in the next step you would add a function that checks for a cached value

from openai import OpenAI

client = OpenAI()

# A simple in-memory cache for demonstration purposes
cache = {}

def check_cache(prompt):
    """Checks if the response for the given prompt is cached."""
    if prompt in cache:
        print("Cache hit")
        return cache[prompt]
    print("Cache miss")
    return None

def call_openai_api(prompt):
    """Calls the OpenAI API to get a response for the given prompt."""
    # Check if the response is cached
    cached_response = check_cache(prompt)
    if cached_response is not None:
        return cached_response

    # If not cached, make an API call
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    # Extract the response text
    response_text = response.choices[0].message.content.strip()

    # Cache the response for future use
    cache[prompt] = response_text

    return response_text

if __name__ == "__main__":
    user_prompt = 'write a poem'
    response_text = call_openai_api(user_prompt)
    print(response_text)

and then maybe add a function that fills the cache with a SUCCESSFUL response (you don't want to cache unsuccessful calls)
ā€¦

and so on
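That guard can be as small as this (a sketch with my own names, building on the in-memory cache above):

```python
cache = {}

def cache_successful_response(prompt, response_text):
    # Only store real answers; an error path that returned None
    # or an empty string is never cached
    if response_text:
        cache[prompt] = response_text

cache_successful_response("bad prompt", None)               # not cached
cache_successful_response("write a poem", "roses are red")  # cached
print(list(cache))  # ['write a poem']
```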

Please run your bot output against the API and see that it actually works…

I just posted one bot output - and I have then pointed out what it made wrong.

this works…

Ok, I must admit that adding that in-memory cache can't work… since the script is restarted with a fresh, empty cache each time… so from here on Redis should be used, or at least a cache file…
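For a cache file, a minimal sketch (my own illustration, not part of any SDK - it only needs json, and the point is that it survives a process restart):

```python
import json
import os
import tempfile

# Demo path; a real script would pick a stable location
CACHE_FILE = os.path.join(tempfile.gettempdir(), "prompt_cache_demo.json")

def load_cache():
    # An empty cache if the file does not exist yet
    if not os.path.exists(CACHE_FILE):
        return {}
    with open(CACHE_FILE) as f:
        return json.load(f)

def save_cache(cache):
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

cache = load_cache()
cache["write a poem"] = "roses are red..."
save_cache(cache)
print(load_cache()["write a poem"])  # re-reading the file finds the entry
```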

or just for demonstration we could run the function twice with the same prompt

from openai import OpenAI

client = OpenAI()

# A simple in-memory cache for demonstration purposes
cache = {}

def check_cache(prompt):
    """Checks if the response for the given prompt is cached."""
    if prompt in cache:
        print("Cache hit")
        return cache[prompt]
    print("Cache miss")
    return None

def call_openai_api(prompt):
    """Calls the OpenAI API to get a response for the given prompt."""
    # Check if the response is cached
    cached_response = check_cache(prompt)
    if cached_response is not None:
        return cached_response

    # If not cached, make an API call
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    # Extract the response text
    response_text = response.choices[0].message.content.strip()

    # Cache the response for future use
    cache[prompt] = response_text

    return response_text

if __name__ == "__main__":
    user_prompt = 'write a poem'
    response_text = call_openai_api(user_prompt)
    print(response_text)
    response_text = call_openai_api(user_prompt)
    print(response_text)

and here we go:

(venv) (base) jochen@dev1:~/projects/tests$ python openaitest.py
Cache miss
Upon the canvas of a starlit night,
Dancing dreams twirl in the pale moon's light.
Billowed whispers of the weeping willows,
Silent symphony on feathered pillows.

Stars akin to diamonds, scattered across the velvet sky,
Illuminating stories that were never by and by.
Lost in the cosmic ballet, in universe we twine,
Heartbeat whispers echo through the corridors of time.

Against a backdrop of infinity, ceaselessly we roam,
In search of fleeting moments, we dare to call our own.
Under the watchful eyes of wandering constellations,
Spinning tales of love, life, and quiet contemplations.

Sapphire seas roaring with tales untold,
Reflecting vibrant stories of the sun, bold.
Dressed in the golden hue of the morning sun,
Daylight awakes, and nighttime is done.

The sun stands proudly, lord of the azure above,
While flowers bloom, an ode to life's undying love.
Witness to the eternal loop of day and night,
The solitary moon glistens, bathing the world in silver light.

The poet weaves words into grand tapestries,
Heartstrings strumming to love's enchanting melodies.
From life, a poem emerges, quiet and profound,
In every breath, a whisper of beauty found.

Cache hit <<<< that worked

Upon the canvas of a starlit night,
Dancing dreams twirl in the pale moon's light.
Billowed whispers of the weeping willows,
Silent symphony on feathered pillows.

Stars akin to diamonds, scattered across the velvet sky,
Illuminating stories that were never by and by.
Lost in the cosmic ballet, in universe we twine,
Heartbeat whispers echo through the corridors of time.

Against a backdrop of infinity, ceaselessly we roam,
In search of fleeting moments, we dare to call our own.
Under the watchful eyes of wandering constellations,
Spinning tales of love, life, and quiet contemplations.

Sapphire seas roaring with tales untold,
Reflecting vibrant stories of the sun, bold.
Dressed in the golden hue of the morning sun,
Daylight awakes, and nighttime is done.

The sun stands proudly, lord of the azure above,
While flowers bloom, an ode to life's undying love.
Witness to the eternal loop of day and night,
The solitary moon glistens, bathing the world in silver light.

The poet weaves words into grand tapestries,
Heartstrings strumming to love's enchanting melodies.
From life, a poem emerges, quiet and profound,
In every breath, a whisper of beauty found.


Thanks a lot for your help. I finished it.
