I’ve been encountering some issues with GPT-3’s responses: they don’t seem to align with my queries. I’ve tried increasing max_tokens from 30 to 256 and experimenting with temperatures ranging from 1 to 7. After some experimentation, I found that setting the temperature to 1 gives somewhat better responses, but unfortunately these changes haven’t resolved the issue. The output can be quite strange, sometimes covering unrelated topics like Japanese cars, museums, Ohio clinics, and Amazon AWS.
Hi, welcome to the community! Temperature is a parameter that controls the “creativity,” or randomness, of the text generated by GPT-3. A higher temperature (e.g., 0.7) produces more diverse and creative output, while a lower temperature (e.g., 0.2) makes the output more deterministic and focused. I think you should lower your temperature (to 0.5, or whatever suits your needs) to get more focused results. Keep the value in the 0–1 range: 0 means fully focused, while 1 means a more creative response.
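To make this concrete, here is a minimal Python sketch of a Completions request body with a low temperature. The parameter names (`model`, `prompt`, `temperature`, `max_tokens`) follow the public API docs; the model name, prompt, and helper function are placeholders for illustration, and the body would still need to be POSTed with your API key:

```python
import json

def build_completion_request(prompt, temperature=0.2, max_tokens=256):
    """Build a Completions request body (illustrative helper, not the SDK)."""
    # Lower temperature -> more deterministic output; 0 is (near-)greedy.
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("keep temperature in [0, 1] for focused answers")
    return {
        "model": "text-davinci-003",   # placeholder model name
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

body = build_completion_request("Summarise the plot of Hamlet.", temperature=0.2)
print(json.dumps(body, indent=2))
```

With a body like this, the only knob you usually need for “stay on topic” behaviour is `temperature`; raising `max_tokens` just allows longer answers, it doesn’t make them more relevant.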
If I may ask you this: say I’m calling the GPT-3 API for a rephrasing or rewriting job, do I use the same endpoint, curl https://api.openai.com/v1/completions?
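For concreteness, this is the kind of request I have in mind. It assumes a rephrasing job is just an ordinary Completions call, so the endpoint stays the same and only the prompt changes; the prompt wording, model name, and helper function here are placeholder illustrations, not anything from the docs:

```python
# Assumption: rephrasing is a plain Completions call; same endpoint as any
# other completion, only the prompt differs.
ENDPOINT = "https://api.openai.com/v1/completions"

def rephrase_request(text: str) -> dict:
    # Prompt wording and model name are illustrative placeholders.
    return {
        "model": "text-davinci-003",
        "prompt": f"Rewrite the following text in your own words:\n\n{text}",
        "temperature": 0.3,
        "max_tokens": 256,
    }

req = rephrase_request("The meeting has been moved to Friday.")
print(ENDPOINT)
print(req["prompt"])
```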
I’m very new to the GPT-3 API and have read the documentation, but I still need some clarification beyond what the handbook covers.
Thanks in advance.