My question is similar to this one, but I’m using the gpt-3.5-turbo model.
I’m getting different results from the API and the ChatGPT website. For example, when I ask “New York City population in 2004”, I get close but different answers.
Also, asking again does not change the reply on the website, but the API replies are always different.
My query is:

```python
import openai  # assumes the API key is configured, e.g. via OPENAI_API_KEY

MODEL = "gpt-3.5-turbo"
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "New York City population in 2004"},
    ],
)
```
How can I get similar replies from the API? Which parameters should I adjust?
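The main knob here is `temperature`: the API default is 1, and lower values make sampling more deterministic, so repeated calls return (nearly) the same reply. A minimal sketch, assuming the legacy `openai` Python SDK with `ChatCompletion` as in the question (the API call itself only runs if a key is configured):

```python
import os

MODEL = "gpt-3.5-turbo"

# Request parameters: temperature=0 makes sampling near-deterministic,
# so repeated calls should return (almost) the same reply.
request = dict(
    model=MODEL,
    messages=[
        {"role": "user", "content": "New York City population in 2004"},
    ],
    temperature=0,  # default is 1; lower values reduce randomness
    top_p=1,        # leave nucleus sampling at its default
)

# Only call the API when a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # requires the openai package
    response = openai.ChatCompletion.create(**request)
    print(response["choices"][0]["message"]["content"])
```

Note that even with `temperature=0` the model is not guaranteed to be perfectly deterministic, but the replies should vary far less than with the default.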
Thanks, that worked!
Do you know if the web version adjusts its temperature (and other parameters) dynamically during an interaction? For example, I asked my question above twice, and the reply was the same, which suggests the temperature is set low. My third question was “Describe main cultural events that happened in New York in 2004”; does it still use the low temperature setting, or is it higher because of the type of question?
I have no idea really, but I doubt it. I rarely use the OpenAI web-based ChatGPT application because I wrote my own “OpenAI Lab”, which is a more feature-rich version of the ‘playground’.
I have no idea, but I doubt the ChatGPT web application “dynamically adjusts temperature”, and I have never read any OpenAI docs that say this happens.
Please search the web, and if you find any documentation from OpenAI that discusses this, kindly post back.