I am unable to replicate the results I get from the ChatGPT Plus user interface (default mode) with the Chat Completions API for the gpt-4 model. What model is the ChatGPT user interface using? Here is the prompt I am using to test in the ChatGPT UI vs. the Chat Completions API. The ChatGPT interface gives me the right result; the Chat Completions API gives me the wrong result.
--Prompt--
I will provide you with a description of a parallel line based geometric figure. Please note that alternate interior angles are equal to each other. Alternate exterior angles are equal to each other. Alternate interior angles are not equal to alternate exterior angles. Here is the problem description: Lines s and t are two parallel lines. Line s runs above line t. Line c traverses lines s and t. The alternate interior angle that is formed on the top left side of the intersection of line c and line t is 110 degrees. The alternate exterior angle that is formed on the top right side when line c intersects line s is x degrees. Do you understand this figure? If so, calculate the value of x. Please perform this calculation step by step. Provide an explanation at each step.
You’ll want to use the Chat Completions API with gpt-3.5-turbo or gpt-4 (if you have access to it; if not, access is rolling out later this month) to be close to ChatGPT. Can you share your code?
I use gpt-4 in my API call: GPT_MODEL = "gpt-4"
Try decreasing the temperature, maybe start with 0.3 and play around from there. I assume your actual goal is to get the correct answer, not just to literally match ChatGPT, correct?
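For reference, the temperature override goes directly on the Chat Completions call. Here is a minimal sketch assuming the pre-1.0 openai Python SDK; the API key and prompt_text values are placeholders, not your actual code:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; an environment variable works too

GPT_MODEL = "gpt-4"

# Placeholder for the geometry prompt quoted at the top of the thread
prompt_text = "Lines s and t are two parallel lines. ..."

response = openai.ChatCompletion.create(
    model=GPT_MODEL,
    messages=[{"role": "user", "content": prompt_text}],
    temperature=0.3,  # lower values make the sampling more deterministic
)

print(response["choices"][0]["message"]["content"])
```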
Yes. Trying to consistently get the correct answer. I tried different temperature settings in the API. I also played around with different GPT-4 model versions in the OpenAI Playground, with no luck.
I just want to clarify, because your initial concern was that you were unable to replicate the ChatGPT results with the API. Does the gpt-4 in ChatGPT Plus give you the correct answer consistently? If so, I believe the default temperature there is 0.7.

My next question is: what system message are you sending with your request?
I am not including any system message. No luck with 0.7 temperature either.
In a quick test, ChatGPT GPT-4 said 70 degrees 4 out of 5 times, and 110 once.

With temperature=0.3:
- gpt-4-0314 said 70 degrees 5 out of 5 times
- gpt-4-0613 (currently the same as gpt-4) said 70 three times and 110 twice

With a cleaner prompt, gpt-4-0613 / gpt-4 said 70 degrees 4 out of 5 times.
(Disclaimer: I didn’t verify any of the math output, just looking at final answer)
Thank you. 70 is the correct answer. I will try cleaning up the prompt and retrying
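(For reference, the 70 checks out under the figure as described, assuming the standard reading: by corresponding angles, the top-left angle at the intersection of c and s is also 110°, and x is its supplement along line s, so x = 180° - 110° = 70°.)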
Though it may not seem relevant, I am going to recommend you send whatever the current standard system message is in order to remove as many variables as possible.
Sending a system message may just help the model get into a good “mindset” going into the problem.
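As a sketch of what that looks like (same pre-1.0 openai Python SDK assumption; the "You are a helpful assistant." text is the generic example from OpenAI's docs, not necessarily ChatGPT's exact system message):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt_text = "..."  # the geometry prompt quoted at the top of the thread

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # Generic system message from OpenAI's documentation examples;
        # replace with whatever standard message you want to test.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt_text},
    ],
    temperature=0.3,
)

print(response["choices"][0]["message"]["content"])
```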
Makes sense. I will do some more testing. Thank you for your advice.