Achieving deterministic API output on language models - HOWTO

This is not true in all cases. I wrote a small script to test the difference between the `temperature` and `top_p` parameters.

The gpt-3.5-turbo model returned different responses even with `top_p = 0.00000000000001`, as you suggested. The behaviour was exactly the same when using `temperature = 0` or `top_p = 0`.

Curiously, I received different outputs only when the prompt was in Portuguese. When my prompt was in English, the responses were all identical.

Here is the JavaScript I used to run the test; it produced different outputs across runs:

const OpenAI = require("openai");

(async function () {
  const openai = new OpenAI({
    apiKey: "xxxxxxxxxxxxxxxxxxxxxx",
  });

  // Send the same request 50 times and log each response so the
  // outputs can be compared for determinism.
  for (let i = 1; i <= 50; i++) {
    const result = await openai.chat.completions.create({
      messages: [
        // "You are a personal assistant"
        { role: "system", content: "Você é um assistente pessoal" },
        // "Tell me something really random, up to 100 characters"
        { role: "user", content: "Me diga uma coisa bem aleatória com até 100 caracteres" },
      ],
      model: "gpt-3.5-turbo",
      // temperature: 0, // alternative tried: same behaviour as top_p below
      top_p: 0.0000000000000000000001,
    });
    console.log(result.choices[0].message.content);
  }
})();
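One thing my snippet above does not try: the chat completions endpoint also accepts a `seed` parameter intended for reproducible sampling (the docs describe it as best-effort, with the response's `system_fingerprint` indicating backend changes that can break reproducibility). Below is a sketch of how I would extend the test, plus a small helper to tally distinct outputs; the `seed` value of 42 and the `countDistinct` helper are my own illustrative additions, not part of the original experiment:

```javascript
// Sketch only: request parameters combining temperature 0 with a fixed seed.
// The seed value is arbitrary; reproducibility is best-effort, not guaranteed.
const params = {
  model: "gpt-3.5-turbo",
  messages: [
    // "Tell me something really random" (same Portuguese prompt as the test)
    { role: "user", content: "Me diga uma coisa bem aleatória" },
  ],
  temperature: 0,
  seed: 42, // identical requests with the same seed should yield identical outputs
};

// Helper to quantify determinism across repeated runs: with truly
// deterministic output, the number of distinct responses is exactly 1.
function countDistinct(responses) {
  return new Set(responses).size;
}

// Mock responses standing in for three API calls:
console.log(countDistinct(["O céu é azul.", "O céu é azul.", "O céu é azul."])); // 1
```

If the distinct count stays above 1 even with a fixed seed, comparing the `system_fingerprint` values across responses would show whether the backend configuration changed between calls.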