I received the response below from GPT-3.5 (gpt-3.5-turbo-0125). From this, I would take a Temperature of 0.7 and a Top-p of 0.9 as the most appropriate settings for subsequent queries to the fine-tuned model, given the results and randomness implied by the fine-tuning training process.
Any variation in future queries on the fine-tuned model thereafter would be based on the same considerations as for a base, non-fine-tuned model.
This is the approach I intend to adopt. I am not an expert and cannot attest to this from wide experience.
My query: "fine-tuned models, default settings for Temperature and Top-p"
In OpenAI’s fine-tuned models, the default settings for Temperature and Top-p are typically set to specific values that are optimized for balanced and high-quality text generation. Here are the typical default settings:
- Temperature: Generally, the default temperature setting in OpenAI’s fine-tuned models is around 0.7. Temperature controls the randomness of the generated text. Lower temperatures result in more deterministic output, while higher temperatures lead to more diversity and randomness in the generated text.
- Top-p (nucleus sampling): The default setting for Top-p is often around 0.9. Top-p (nucleus sampling) controls the diversity of the generated text by restricting the sampling to the most likely tokens whose cumulative probability exceeds a certain threshold (p). This helps in avoiding low-quality or nonsensical generated text.
These default settings are designed to provide a good balance between coherence and diversity in text generation. However, you can always customize these parameters based on your specific use case and requirements to achieve the desired output quality.
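For what it's worth, applying those values to a subsequent query is just a matter of passing them on the request. A minimal sketch with the openai Python SDK, assuming a hypothetical fine-tuned model ID (substitute the one your fine-tuning job returned); note the API reference generally recommends altering temperature or top_p, but not both:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "ft:gpt-3.5-turbo-0125:my-org::abc123" is a hypothetical fine-tuned
# model ID used only for illustration.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0125:my-org::abc123",
    messages=[{"role": "user", "content": "Your prompt here"}],
    temperature=0.7,  # the value suggested in the response above
    top_p=0.9,        # likewise; adjust per use case
)
print(response.choices[0].message.content)
```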
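And for intuition about what the two knobs actually do at sampling time, here is a toy sketch of the standard temperature-scaling and nucleus-truncation math over a made-up next-token distribution. The token strings and logit values are invented for illustration; this is not the API's internal code:

```python
import math

# Made-up next-token logits, for illustration only.
logits = {"the": 2.0, "a": 1.5, "cat": 0.5, "xylophone": -1.0}

def sampling_distribution(logits, temperature=0.7, top_p=0.9):
    # Temperature scaling: lower T sharpens the distribution
    # (more deterministic), higher T flattens it (more random).
    scaled = {tok: v / temperature for tok, v in logits.items()}
    # Softmax over the scaled logits.
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    # Nucleus (top-p) truncation: keep the smallest set of most likely
    # tokens whose cumulative probability reaches top_p, then renormalize.
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

print(sampling_distribution(logits))            # the defaults discussed above
print(sampling_distribution(logits, 0.2, 0.9))  # colder: near-deterministic
```

Running it shows the effect directly: at temperature 0.2 almost all probability mass collapses onto the top token, while at 0.7 the tail tokens keep meaningful probability until the top-p cutoff removes the least likely ones.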