Trying to get the chat completion API to work but receiving "AttributeError: text" — how do I fix this?

Hi, I am trying to run my code that integrates with the chat completions API (OpenAI API), but when I submit a prompt it throws this error:

```
AttributeError: text
```

The error points to this part of the code:

```python
# Extract the generated response text from the API response
chat_response = response.choices[0].text.strip()
```

How do I fix this so that my code runs and I am able to generate chat responses?

Here is all of my code:

```python
from flask import Flask, render_template, request
import openai

app = Flask(__name__)

# Set your OpenAI API key here
openai.api_key = "sk-my-api-key"

# Define the prompt and chat history
prompt = "prompt"
messages = [
    {"role": "system", "content": "What is the capital of France?"},
    {"role": "user", "content": "The capital of France is Paris."},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
]

# Define a model and engine
model_engine = "gpt-3.5-turbo"

# Generate the response using the OpenAI API
response = openai.ChatCompletion.create(
    model=model_engine,
    temperature=0.7,
    max_tokens=1024,
    n=1,
    stop=None,
    messages=messages,
)

# Print the chatbot response
print(messages)

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/chat", methods=["POST"])
def chat():
    # Get the user's prompt from the form data
    prompt = request.form["prompt"]

    # Define the chat history
    chat_history = [{'role': 'system', 'content': "What is the capital of France?"},
                    {'role': 'user', 'content': "The capital of France is Paris."},
                    {'role': 'assistant', 'content': "The Los Angeles Dodgers won the World Series in 2020."},
                    {'role': 'user', 'content': "Where was it played?"}]

    # Generate the response using the OpenAI API
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        max_tokens=1024,
        n=1,
        stop=None,
        messages=messages,
    )

    # Extract the generated response text from the API response
    chat_response = response.choices[0].text.strip()

    # Render the chatbot response template with the response text
    return render_template("chat.html", prompt=prompt, response=response)

if __name__ == "__main__":
    app.run(debug=True)
```

The response format is different from the one for standard text completions (see this example from the docs):

```
{
  'id': 'chatcmpl-6p9XYPYSTTRi0xEviKjjilqrWU2Ve',
  'object': 'chat.completion',
  'created': 1677649420,
  'model': 'gpt-3.5-turbo',
  'usage': {'prompt_tokens': 56, 'completion_tokens': 31, 'total_tokens': 87},
  'choices': [
    {
      'message': {
        'role': 'assistant',
        'content': 'The 2020 World Series was played in Arlington, Texas at the Globe Life Field, which was the new home stadium for the Texas Rangers.'},
      'finish_reason': 'stop',
      'index': 0
    }
  ]
}
```

So instead of

```python
chat_response = response.choices[0].text.strip()
```

you need to use

```python
chat_response = response.choices[0].message.content.strip()
```
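As a quick sketch of the difference, here is the docs example above as a plain dict (the library's response object also supports this key-style access). Note the text is nested under `message.content`, and there is no top-level `.text` on a choice:

```python
# Example chat completions response, trimmed to the relevant part (dict form)
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The 2020 World Series was played in Arlington, Texas.",
            },
            "finish_reason": "stop",
            "index": 0,
        }
    ],
}

# Chat completions nest the reply under message.content, not under .text
chat_response = response["choices"][0]["message"]["content"].strip()
print(chat_response)  # → The 2020 World Series was played in Arlington, Texas.
```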

Thank you, I'll try it and let you know if it works.

Now my response is always the same:

```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The 2020 World Series was played in Arlington, Texas at Globe Life Field.",
        "role": "assistant"
      }
    }
  ],
  "created": 1684060885,
  "id": "chatcmpl-7G3SHBIWSvqBBoBlNSAtiKn0944fO",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 17,
    "prompt_tokens": 55,
    "total_tokens": 72
  }
}
```

What do I add for `id` and `created`?

Ah, no, you don't use that whole thing; it was just an example response.

Here is your code updated. Try this new version, exactly like this:

```python
# Define the chat history
chat_history = [{'role': 'system', 'content': "What is the capital of France?"},
                {'role': 'user', 'content': "The capital of France is Paris."},
                {'role': 'assistant', 'content': "The Los Angeles Dodgers won the World Series in 2020."},
                {'role': 'user', 'content': "Where was it played?"}]

# Generate the response using the OpenAI API
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.7,
    max_tokens=1024,
    n=1,
    stop=None,
    messages=chat_history,  # This also needed changing, as you weren't using your own history above
)

# Extract the generated response text from the API response
chat_response = response.choices[0].message.content.strip()  # This is what's different from non-chat completions

# Render the chatbot response template with the response text
return render_template("chat.html", prompt=prompt, response=response)
```

It's still giving me a response on the front end that looks like this:

```
Chatbot: {
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "I'm sorry, I didn't see your notes. Please provide me with your notes and I will be happy to create a presentation for you.",
        "role": "assistant"
      }
    }
  ],
  "created": 1684066304,
  "id": "chatcmpl-7G4rgMtktUBXSkgRw7kH2U7pcSSTP",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 29,
    "prompt_tokens": 118,
    "total_tokens": 147
  }
}
```

This is my code now:

```python
from flask import Flask, render_template, request
import openai

app = Flask(__name__)

# Set your OpenAI API key here
openai.api_key = "sk-my-api-key"

# Define the prompt and chat history
prompt = "make a presentation from my notes"
chat_history = [
    {"role": "system", "content": "I take your notes and thematically organize them into a presentation, breaking the content down into individual slides."},
    {"role": "user", "content": "Please take these notes and make a multi-slide presentation for me."},
    {"role": "assistant", "content": "I write organized and easy to read presentations, that organize your slide notes into individual slides with headers and body content."},
    {"role": "user", "content": "can you create a presentation for me, breaking my notes down into slides with headers and body content for each slide?"},
    {"role": "assistant", "content": "Of course please provide me with your notes."}
]

# Define a model and engine
model_engine = "gpt-3.5-turbo"

# Generate the response using the OpenAI API
response = openai.ChatCompletion.create(
    model=model_engine,
    temperature=0.7,
    max_tokens=1024,
    n=1,
    stop=None,
    messages=chat_history,
)

# Print the chatbot response
print(chat_history)

@app.route("/")
def index():
    return render_template("index.html")


@app.route("/chat", methods=["POST"])
def chat():
    # Get the user's prompt from the form data
    prompt = request.form["prompt"]

    # Define the chat history
    chat_history = [{"role": "system", "content": "I take your notes and thematically organize them into a presentation, breaking the content down into individual slides."},
                    {"role": "user", "content": "Please take these notes and make a multi-slide presentation for me."},
                    {"role": "assistant", "content": "I write organized and easy to read presentations, that organize your slide notes into individual slides with headers and body content."},
                    {"role": "user", "content": "can you create a presentation for me, breaking my notes down into slides with headers and body content for each slide?"},
                    {"role": "assistant", "content": "Of course please provide me with your notes."}]

    # Generate the response using the OpenAI API
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        max_tokens=1024,
        n=1,
        stop=None,
        messages=chat_history,
    )

    # Extract the generated response text from the API response
    chat_response = response.choices[0].message.content.strip()

    # Render the chatbot response template with the response text
    return render_template("chat.html", prompt=prompt, response=response)

if __name__ == "__main__":
    app.run(debug=True)
```

It's generating a response, but I don't think it is generating a response from my prompt, only from the chat history. I have a simple HTML page where I can enter a prompt, in this case presentation notes, but it does not seem to read the notes and make a presentation from them; instead it makes a presentation about a random topic. I would like it to make a presentation from my notes lol

Actually, to be honest, I've been testing it and I don't think it's receiving my prompt at all. Each time I test it, it responds based on exactly where we left off in the conversation in the chat_history. It is never able to generate a presentation related to any content I attempt to submit. Is there a reason for this? How do I fix it?

You're not actually adding your prompt to the chat_history that you are sending to OpenAI. How is OpenAI supposed to be aware of your prompt this way?

You need to append your prompt as another user-role message to the array before you pass it to the OpenAI call.
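As a rough sketch (variable names match your code; the system and assistant messages are just placeholders standing in for your real ones):

```python
# Your existing chat history, abbreviated
chat_history = [
    {"role": "system", "content": "I take your notes and organize them into a presentation."},
    {"role": "assistant", "content": "Of course, please provide me with your notes."},
]

# Inside chat(), this would be: prompt = request.form["prompt"]
prompt = "make a presentation from my notes"

# Append the user's prompt as a new user message BEFORE calling the API,
# so the model actually sees it at the end of the conversation
chat_history.append({"role": "user", "content": prompt})

# Then pass the updated history to the call, e.g.:
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=chat_history)
```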

Look, I think you need a proper programmer. This forum is not going to help you rewrite your entire program: the problems you're highlighting have nothing to do with OpenAI but with the rest of your code, so I would advise you to get help from a programmer to build your application.