Temperature, top_p and top_k for chatbot responses

Great write-up!

  • I’ll need to try some sample prompts with a few different settings.
  • Any thoughts on setting both temperature and top_p to non-default values, despite the usual recommendation to change only one?
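To make that question concrete, here's a toy sketch of how the two knobs interact (this is a simplified illustration of temperature-scaled softmax plus nucleus filtering, not OpenAI's actual sampler; the function name and logit values are made up):

```python
import math

def sample_filter(logits, temperature=1.0, top_p=1.0):
    """Toy model: temperature rescales logits before softmax;
    top_p then keeps the smallest set of tokens whose cumulative
    probability reaches top_p, and renormalises over them."""
    # Temperature scaling: higher values flatten the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p) filtering: take tokens in descending order of
    # probability until the cumulative mass first reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# Same logits, same top_p=0.5, different temperatures:
# at temperature 1.0 the top token alone exceeds the 0.5 cutoff,
# while at temperature 10.0 the flattened distribution lets more
# tokens survive the same cutoff.
low = sample_filter([2.0, 1.0, 0.5, 0.1], temperature=1.0, top_p=0.5)
high = sample_filter([2.0, 1.0, 0.5, 0.1], temperature=10.0, top_p=0.5)
```

The takeaway: raising both at once compounds the randomness, since temperature widens the distribution and a high top_p then declines to trim it, which is presumably why the docs suggest touching only one.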

Some notes from my testing, mainly writing code for a specific task, using:

  • Chat Completions
  • gpt-3.5-turbo
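For reference, that setup corresponds to a request body along these lines (a minimal sketch using the Chat Completions parameter names; the system and user messages are hypothetical placeholders, and the sampling values just illustrate raising temperature while leaving top_p at its default):

```python
import json

# Example Chat Completions request body. Parameter names follow the
# OpenAI API; message contents here are placeholders, not my real prompts.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You write Python code for the given task."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
    "temperature": 1.5,  # raised from the default of 1.0
    "top_p": 1.0,        # left at its default
}

body = json.dumps(payload, indent=2)
```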

Even after massaging the prompt significantly, some instructions in the system prompt are still ignored.

Increasing the temperature to 1.5 almost always gets me the expected behavior, although repeated calls become much less consistent and the overall cohesion of the answer suffers.

It’s possible my prompts just need further refinement: more specific, maybe a little more verbose?

Several calls with the same prompts usually get me enough good code to work around any mistakes.

This is not ideal, so I will be trying the configs you mentioned.