I don’t think that’s the intention of the documentation.
The documentation says:
"The assistant’s reply can be extracted with:"
and it is referring to the code above it:
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
Running the example code shows that the assistant’s reply cannot, in fact, be extracted with:
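For what it’s worth, with the openai Python library v1 and later, client.chat.completions.create() returns a ChatCompletion object rather than a plain dict, so the reply is read with attribute access. A minimal sketch (assuming openai>=1.0 and an API key set in the environment):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Where was the 2020 World Series played?"}
    ]
)

# The response is a ChatCompletion object, so the reply is an attribute,
# not a dictionary entry:
print(response.choices[0].message.content)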