I get different answers to the same request. I am sharing screenshots.
Can you help me?
I’ll be glad to jump in and talk about what you might be seeing.
Firstly, AI language models at their default settings are not designed to produce identical outputs from identical inputs. It was found pretty early on that natural human writing is full of small variations and unpredictable word choices; that variability is part of what makes text read as human.
A model that has distilled language into a massive map of probabilities, and then always follows only the single most likely path through a sentence, reads as dry and artificial, and its output becomes easy to recognize as machine-generated.
Therefore sampling is used: the model's internal probability for each candidate word (token) becomes the likelihood that it appears in the output. If there is a 40% chance that a response begins with "I'll", then it will begin that way in roughly 40% of the runs of that same input.
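You can see this behavior with a toy weighted-sampling sketch. The tokens and probabilities below are made-up illustrative numbers, not real model output; the point is just that a token with probability 0.4 starts the reply in about 40% of runs:

```python
import random

# Hypothetical next-token distribution for the first word of a reply.
first_token_probs = {"I'll": 0.40, "Sure,": 0.35, "Happy": 0.25}

def sample_first_token(rng: random.Random) -> str:
    """Pick one token, weighted by its probability."""
    tokens = list(first_token_probs)
    weights = list(first_token_probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded only so the demo itself is repeatable
runs = 10_000
hits = sum(sample_first_token(rng) == "I'll" for _ in range(runs))
print(f'"I\'ll" started the reply in {hits / runs:.0%} of runs')
```

Run it and the observed frequency lands close to 40%, which is exactly why two identical requests can open with different words.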
However, you may want to test whether you are really talking to the same model and sending the same input. For that, and for other cases where you want to constrain the creativity of unexpected word choices, OpenAI offers two parameters: temperature and top_p. Reducing either one below the API default of 1.0 reduces the diversity of the output language.
For the simplest answer: set top_p=0.0001 and you will get outputs that are word-for-word identical almost every time.
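Why does a tiny top_p work? Nucleus (top-p) sampling keeps only the smallest set of top tokens whose cumulative probability reaches the threshold, then samples from that reduced set. Here is a minimal sketch with the same made-up numbers as before (the API applies this server-side over the real token distribution; this is only an illustration):

```python
def top_p_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. With a tiny top_p, only the
    single most likely token survives."""
    kept, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"I'll": 0.40, "Sure,": 0.35, "Happy": 0.25}
print(top_p_filter(probs, top_p=0.0001))  # only "I'll" survives, with probability 1.0
print(top_p_filter(probs, top_p=0.75))    # "I'll" and "Sure," survive
```

With top_p=0.0001 the very first token already exceeds the threshold, so the filtered set has one member and sampling has nothing left to vary.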
Prompt instructions, temperature, and top_p all need to be set based on your scenario.
One example - try setting the temperature to zero to get the most deterministic behavior, so repeated runs almost always return the same response. Look here for some examples - https://platform.openai.com/docs/api-reference/audio
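Temperature works by rescaling the model's scores (logits) before they are turned into probabilities: dividing by a small temperature sharpens the distribution toward the top token, and temperature 0 collapses it to pure greedy (argmax) decoding. A toy sketch with hypothetical logits (again, the API does this server-side):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Rescale logits by 1/temperature, then apply softmax.
    Lower temperature concentrates probability on the top token;
    temperature == 0 is treated as greedy (argmax) decoding."""
    if temperature == 0:  # all probability mass on the highest logit
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 1.0))  # noticeable spread across tokens
print(softmax_with_temperature(logits, 0.2))  # nearly all mass on the first token
print(softmax_with_temperature(logits, 0.0))  # exactly [1.0, 0.0, 0.0]
```

Note that even at temperature 0, responses from the hosted API can still occasionally differ, so treat it as near-deterministic rather than guaranteed.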
If you are still not getting a suitable response, try experimenting with your prompt instructions. OpenAI provides a lot of strategies - look here: https://platform.openai.com/docs/guides/prompt-engineering