I am using text-davinci-003 (temperature = 0.1, top_p = 1).
I give it a 1,000-token text and ask:
‘Can you identify any bias in the text? Say only Yes or No’
The results are roughly 97% ‘No’ and 3% ‘Yes’.
But if I feed it the same text with temperature changed to 0.7 and ask:
‘Can you identify any bias in the text?’
The results are much more elaborate, of course, but half of them or more say something like ‘There is bias’ or ‘There is some bias’.
Can I interpret the first batch of results as ‘conservative’, i.e. the model was really looking for hard evidence of bias, and the second batch as more ‘democratic’ due to the increase in temperature?
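For intuition about what temperature does (a simplified sketch, not the API's actual internals, and with made-up logits for two hypothetical candidate tokens): temperature divides the logits before the softmax, so a low temperature sharpens the distribution toward the top token, while a higher temperature flattens it and lets lower-ranked tokens through more often.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before softmax: low T sharpens the
    # distribution (near-deterministic), high T flattens it (more diverse).
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for two candidate next tokens, "No" and "Yes".
logits = [2.0, 0.0]

p_cold = softmax_with_temperature(logits, 0.1)  # "No" dominates almost always
p_warm = softmax_with_temperature(logits, 0.7)  # "Yes" sampled noticeably more often
```

On this toy view, temperature 0.1 doesn't make the model apply a stricter standard of evidence; it just makes it pick its single most likely answer almost every time, whereas 0.7 samples more broadly from the same underlying preferences. Note also that your two prompts differ (‘Say only Yes or No’ vs. the open-ended version), which by itself can shift the answers independently of temperature.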