ChatGPT and OpenAI API responses are very different, even though I am using the same gpt-3.5-turbo model as the ChatGPT UI. The API responses are very short and don't follow the prompt correctly. I am keeping temperature=0.7. Can someone help me with this? Am I missing some other parameters, and if so, what would be the recommended values for them?
Can you share your API code? Try adding a system prompt if you don’t have one.
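For reference, here is a minimal sketch of a chat completion call with a system prompt, using the openai Python SDK (v1.x client style; the model name matches your post, but the prompt text is just a placeholder assumption):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system prompt steers the model's behavior and answer style.
        {"role": "system", "content": "You are a helpful assistant. Give detailed, step-by-step answers."},
        {"role": "user", "content": "Explain how HTTP caching works."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Without a system message, the API defaults to fairly terse behavior, whereas the ChatGPT UI adds its own system prompt behind the scenes, which is one common reason the two feel so different.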
Try lowering the temperature to 0.3-0.5 so the model follows your request more strictly. Also, did you set max_tokens? That caps the maximum response length, so a low value will truncate replies.
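Something like this, as a minimal sketch using the openai Python SDK (the exact values here are illustrative assumptions, not universal recommendations):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in detail."},
    ],
    temperature=0.4,  # lower temperature -> more deterministic, prompt-following output
    max_tokens=1024,  # upper bound on completion length; too small a value cuts replies short
)

print(response.choices[0].message.content)
```

Note that max_tokens is an upper bound, not a target: the model can still stop early, but it can never exceed it, so a small cap would explain unexpectedly short answers.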
It would be helpful if you could paste your full request.