The "seed" option for GPT does not increase the determinism level

I’m asking GPT to determine the level of violence (“serious”, “minor”, “none”) based on the provided crime description.
I send two messages:

  1. “system” - with instructions, rules, and examples.
  2. “user” - with a crime description.

Sending the same two messages multiple times, I randomly receive different responses (“serious” or “minor”).

model: gpt-3.5-turbo-1106,
temperature: 0,
seed: 1111

I haven’t noticed any difference between using and not using the “seed” parameter.
Am I wrong in understanding this parameter?


My expectation is that seed reproduces the parts of generation where randomness is involved, such as when you set the temperature to 1.0 and get a different response every time from diverse sampling.

Setting the seed used by the multinomial sampler makes it pick the same way again. That relies on the sampler’s inputs being identical each run, though, which they are not.

It cannot control the fact that:

  1. a temperature of zero is not a deterministic setting (mathematically it would be a divide-by-zero, so it has to be special-cased),
  2. the logits returned by the AI model are not deterministic values either,
  3. and you cannot inspect the logits of chat models, as that would expose the flaws in the architectural underpinnings.
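To make the sampling idea concrete, here is a toy sketch (not OpenAI’s actual implementation; the logits are made up) of what a seeded multinomial sampler does. Fixing the seed reproduces the same draw, but only when the probabilities fed into it are identical from run to run:

```python
import math
import random

# Hypothetical logits for the next token (invented for illustration).
logits = {"serious": 2.1, "minor": 2.0, "none": -1.5}

def sample_next_token(logits, temperature, seed):
    """Temperature-scaled softmax followed by a seeded multinomial draw."""
    rng = random.Random(seed)  # fixing the seed fixes the draw
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    # Same seed + same probabilities -> same token every time.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

a = sample_next_token(logits, temperature=1.0, seed=1111)
b = sample_next_token(logits, temperature=1.0, seed=1111)
assert a == b  # identical inputs reproduce the choice
```

If the model’s logits shift slightly between calls, the probabilities change, and the same seed can land on a different token.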

So, theoretically, if I set temperature > 0 and do not set a seed, I will receive a different result for the same request each time (I suppose from some finite set of results).

If I set temperature > 0 and also set a seed, I will receive the same result for the same request each time, and a different result for the same request with another seed. Basically, each seed will be “related” to a different variant of the response.

If I set temperature == 0, the seed doesn’t affect the response.

Does it make sense?


I haven’t experimented with the quality or effects of the setting, but it wouldn’t take much to investigate.

Take temperature = 1.5 - you get very “creative” outputs, and never the same thing twice. The differences between token probabilities are reduced even further than at the default, so very unlikely tokens can be selected and thrown into the generation.

I actually found the newest -1106 model to be wildly out-of-control at temperature 1.5, so I had to reduce it.
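A quick illustration of why high temperature flattens the distribution. The logits below are invented, but the softmax math is the real mechanism: dividing logits by a temperature above 1 shrinks the gaps between them, so tail tokens gain probability:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into probabilities, scaled by temperature."""
    scaled = [lg / temperature for lg in logits]
    total = sum(math.exp(v) for v in scaled)
    return [math.exp(v) / total for v in scaled]

logits = [4.0, 2.0, 0.0]  # made-up scores for three candidate tokens

for t in (0.7, 1.0, 1.5):
    print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
```

Running this shows the lowest-ranked token’s probability climbing as temperature rises, which is exactly why unlikely words start appearing in the output.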

Let’s experiment with some code that loops over each seed in a list.

from openai import OpenAI

client = OpenAI()

system = [{"role": "system",
           "content": "You are a creative writing expert."}]
user = [{"role": "user", "content":
         "Produce: a paragraph story about an escaped chimpanzee."}]

# Two runs each at two different seeds, so repeats can be compared.
for seed in [444, 444, 666, 666]:
    response = client.chat.completions.create(
        messages=system + user,
        model="gpt-3.5-turbo-1106",
        temperature=1.1,
        stream=True,
        seed=seed,
        max_tokens=100,
    )
    print(f"\n==Response with seed {seed}==")
    for delta in response:
        if not delta.choices[0].finish_reason:
            word = delta.choices[0].delta.content or ""
            print(word, end="")
With two generations at seed 444, and two more runs at seed 666, what do we get?

==Response with seed 444==
At the quaint little zoo, there was a mischievous chimpanzee named Popo who had always longed for freedom. One day, as the zookeeper was distracted by a group of visitors, Popo seized his chance and stealthily picked the lock of his enclosure. With glee in his eyes, he swung from branch to branch, evading the bewildered keepers. With freedom at last, Popo embarked on an exhilarating adventure, exploring the vast world beyond the bars with

==Response with seed 444==
At the quaint little zoo, there was a mischievous chimpanzee named Popo who had always longed for freedom. One day, as the zookeeper was distracted while feeding the other animals, Popo seized his chance and stealthily unlocked his cage. With a mischievous twinkle in his eyes, he swung from tree to tree, avoiding the watchful eyes of the zoo staff. Popo’s heart raced with exhilaration as he made his way out of the zoo and into the

==Response with seed 666==
In the heart of the lush jungle, a mischievous chimpanzee named Sammy had successfully plotted his great escape from the research facility. With a twinkle of defiance in his eye, he swung effortlessly from tree to tree, reveling in his newfound freedom. As news of his escape spread, the local villagers gossiped about the elusive intruder, whose wit and cunning made him a legend in their eyes. Sammy, clever and resourceful, evaded capture at every turn, using his intelligence

==Response with seed 666==
In the heart of the lush jungle, a mischievous chimpanzee named Sammy had successfully plotted his great escape from the research facility. With a cunning mind and agile body, he managed to unlock his cage, slip through a small window, and swing off into the depths of the dense foliage. As the sun began to set, the researchers scrambled to find the elusive primate, but Sammy had cunningly left behind a trail of false clues. In the dim twilight, the mischievous chimp

So we’ve accomplished the goal of a seed value: the same wild variation is reproduced at the start of each run. But the fault is also visible: the outputs diverge partway through.

The model has underlying non-deterministic faults. Even with a parameter such as top_p = 0.00000001, which allows nothing but selection of the highest-ranking token, on long generations that “top” token can occasionally switch and give a different choice, due to the vector math that comes before token sampling.
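For intuition, here is a rough sketch of nucleus (top_p) truncation with made-up probabilities. A microscopic top_p keeps only the single highest-ranked token, so any run-to-run jitter in which token ranks first changes the output directly:

```python
def nucleus(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

probs = {"the": 0.62, "a": 0.30, "one": 0.08}
print(nucleus(probs, 1e-8))  # only the single top token survives
print(nucleus(probs, 0.90))  # the top two tokens together reach 0.90
```

With top_p that tiny, sampling degenerates to “pick the winner” - and if upstream floating-point math occasionally reorders two near-tied tokens, the winner changes even though nothing about the request did.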

Thanks for your explanation.
I think I have to learn more about what temperature, top_p, and seed exactly are, because I still don’t understand why I might need to use the seed parameter.
I will be back once I have more questions.

Hi,

I tried using seed in order to produce determinism in the response.

With the same seed I am still getting different system fingerprints and responses.

Here is response #1:

{
    "Action": [
        {
            "end_time": "00:04:59,874",
            "start_time": "00:04:09,333"
        }
    ],
    "Comedy": [
        {
            "end_time": "00:02:43,457",
            "start_time": "00:02:27,958"
        },
        {
            "end_time": "00:02:59,000",
            "start_time": "00:02:46,166"
        },
        {
            "end_time": "00:03:14,541",
            "start_time": "00:03:03,500"
        },
        {
            "end_time": "00:03:32,291",
            "start_time": "00:03:15,958"
        },
        {
            "end_time": "00:04:02,999",
            "start_time": "00:03:39,916"
        },
        {
            "end_time": "00:04:14,541",
            "start_time": "00:04:03,791"
        },
        {
            "end_time": "00:05:07,874",
            "start_time": "00:04:59,541"
        },
        {
            "end_time": "00:05:46,540",
            "start_time": "00:05:09,916"
        },
        {
            "end_time": "00:06:00,749",
            "start_time": "00:05:54,541"
        },
        {
            "end_time": "00:06:19,916",
            "start_time": "00:06:02,416"
        },
        {
            "end_time": "00:07:01,207",
            "start_time": "00:06:28,208"
        },
        {
            "end_time": "00:07:07,958",
            "start_time": "00:07:03,541"
        }
    ],
    "Debate": [],
    "Emotion_Drama": [],
    "Love": [
        {
            "end_time": "00:06:48,624",
            "start_time": "00:06:38,333"
        }
    ],
    "Thriller": [
        {
            "end_time": "00:05:32,333",
            "start_time": "00:05:06,833"
        }
    ]
}

Response #2:

{
    "Action": [
        {
            "end_time": "00:07:25,207",
            "start_time": "00:07:10,083"
        }
    ],
    "Comedy": [
        {
            "end_time": "00:07:25,207",
            "start_time": "00:07:10,083"
        },
        {
            "end_time": "00:09:00,540",
            "start_time": "00:07:52,000"
        },
        {
            "end_time": "00:10:18,457",
            "start_time": "00:09:22,000"
        }
    ],
    "Debate": [
        {
            "end_time": "00:08:43,083",
            "start_time": "00:08:37,708"
        }
    ],
    "Emotion_Drama": [
        {
            "end_time": "00:07:25,207",
            "start_time": "00:07:10,083"
        },
        {
            "end_time": "00:08:43,083",
            "start_time": "00:08:37,708"
        }
    ],
    "Love": [
        {
            "end_time": "00:09:41,749",
            "start_time": "00:09:34,000"
        }
    ],
    "Thriller": [
        {
            "end_time": "00:09:21,582",
            "start_time": "00:09:18,541"
        },
        {
            "end_time": "00:09:27,874",
            "start_time": "00:09:26,625"
        }
    ]
}
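Differing system fingerprints are the expected explanation here: the fingerprint identifies the backend configuration that served the request, and when it changes between calls, the seed cannot guarantee matching outputs. A small helper (the fingerprint values below are placeholders, not real API output) for grouping collected responses so mismatches are easy to spot:

```python
from collections import defaultdict

def group_by_fingerprint(results):
    """Group (system_fingerprint, content) pairs by fingerprint."""
    groups = defaultdict(list)
    for fingerprint, content in results:
        groups[fingerprint].append(content)
    return dict(groups)

# With real calls you would collect pairs like:
#   response = client.chat.completions.create(..., seed=1111)
#   results.append((response.system_fingerprint,
#                   response.choices[0].message.content))
results = [("fp_aaa", "serious"), ("fp_aaa", "serious"), ("fp_bbb", "minor")]
print(group_by_fingerprint(results))
```

Only responses sharing a fingerprint are meaningful to compare for seed reproducibility; a response under a different fingerprint came from a changed backend.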