Hello! I’m working on a project where I would prefer to use completion models as opposed to chat models. I read in a different topic that it’s possible to approximate this by setting up an assistant message and then forcing the model to generate another assistant message that continues it (see below).
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "The weather outside is "
                }
            ]
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
# sunny and warm.
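For reference, the commented line after each snippet is just what comes back in response.choices[0].message.content:

print(response.choices[0].message.content)
# sunny and warm.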
However, when I drop in a different model name, I get different behaviour.
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "The weather outside is "
                }
            ]
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
# As a language model AI developed by OpenAI, I don't have real-time information or access to current data, so I can't provide you with the current weather. Please use a weather website or application to get this information.
I can partly force it to behave like a completion model by setting a system message:
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You must act like a legacy completion model. "
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "The weather outside is "
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "quite cold today. Make sure you bundle up if you're going out! It's expected to remain cloudy throughout the day with the possibility of some light snow in the late afternoon. As for the temperature, it's around 32 degrees Fahrenheit. Remember to stay warm and safe!"
                }
            ]
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
# quite chilly today. It would be a good idea to wear warm clothes if you're planning to go out.
However, I’m not certain this is as reliable as the first approach with gpt-3.5-turbo-16k. Is there a robust way to ‘trick’ the Chat Completions API into behaving like a legacy completion model?
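For context, the behaviour I’m trying to reproduce is what the legacy Completions endpoint gives, where the model simply continues raw prompt text. A rough sketch of that (it works with gpt-3.5-turbo-instruct, but GPT-4 is only offered through Chat Completions, which is why I’m trying to coax the chat API into doing it):

from openai import OpenAI
client = OpenAI()
# Legacy Completions endpoint: the model just continues the raw prompt.
# gpt-3.5-turbo-instruct is a completion model; GPT-4 isn't available on this endpoint.
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The weather outside is ",
    temperature=1,
    max_tokens=256,
)
print(response.choices[0].text)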