Why is there a difference between the ChatGPT web version and the GPT-3.5 API models (gpt-3.5-turbo / text-davinci-003)?
I am trying to replicate the ChatGPT web experience through the API: I use the ChatGPT web version on one side and compare its responses with the API's responses on the other.
import openai

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=txt,
    temperature=1,
    max_tokens=1000,
)
The responses coming from the ChatGPT web version are far more advanced than those from the text-davinci-003 or gpt-3.5-turbo models.
What's the reason behind this? Does the ChatGPT web version use models that are further trained and not exposed to us? Do we have to play around with model parameters like temperature, frequency penalty, presence penalty…? Or something else?
To get results similar to ChatGPT, you should use the Chat Completions endpoint and the gpt-3.5-turbo model. You should also ensure that the model is given the previous chat context to replicate the results.
import openai

def stream_openai_response(prompt):
    # stream=True makes the API return partial responses as they are
    # generated, matching the name of this function.
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1000,
        stream=True,
    )
    for chunk in response:
        yield chunk["choices"][0].get("delta", {}).get("content", "")
I believe the web version was newly trained as of May 24, but the API version is still from March. Not official, of course…
I tried the gpt-3.5-turbo model as well, but there is still a difference in behaviour between the two approaches.
Yes, I guess so, but can we get access to these newly trained models?
ChatGPT should be using gpt-3.5-turbo.
This is just speculation, but there could be a bit more steering on the system role. I added things like:
- Your responses should be informative, visual, logical and actionable.
- If suited to the conversation, you may generate short suggestions for the next user turns that are relevant to the conversation.
- You do not generate generic suggestions for the next user turn, such as “Thank you.”
(source for some of these: https://twitter.com/kliu128/status/1623472922374574080 )
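Expressed as an API call, that steering would be a system message prepended to the conversation. A sketch under that assumption (the exact system prompt ChatGPT uses is not public; the instructions below are just the guesses listed above):

```python
def steered_messages(user_prompt):
    """Prepend a speculative ChatGPT-style system prompt to a user turn."""
    system_prompt = (
        "Your responses should be informative, visual, logical and actionable. "
        "If suited to the conversation, you may generate short suggestions for "
        "the next user turns that are relevant to the conversation. "
        "Do not generate generic suggestions for the next user turn, "
        'such as "Thank you."'
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Pass the result as `messages` to openai.ChatCompletion.create(...).
```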
I believe ChatGPT is more finely steered to be friendly to beginners who have no experience with AI or prompt engineering, and to deal with trolls who are trying to make it look bad.
Those using the API, by contrast, would want something more flexible, and we can assume they have some experience.