Is the seed parameter getting deprecated?

I see there is no way to set a seed parameter in the Responses API. Chat Completions does have a seed parameter, but it is deprecated. Does this mean there is no way to get “some” determinism in the output? I am aware that even when the seed parameter was supported, there was no guarantee of 100% determinism.

Is there any alternative to seed? Would love to hear from an OpenAI engineer on this topic.
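
For context, this is roughly what I mean, using the Python SDK (just a sketch; the model name is only an example):

```python
from openai import OpenAI

client = OpenAI()

# Chat Completions still accepts `seed`, though the docs mark it as deprecated.
resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Name three constellations."}],
    seed=12345,
    temperature=1.0,
)
print(resp.choices[0].message.content)
# `system_fingerprint` was supposed to indicate when the backend changed
# in a way that breaks seed-based reproducibility.
print(resp.system_fingerprint)
```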

Seed, had it arrived back when GPT-3 was current, or on the completions endpoint, would have let you use a higher (default) temperature and still get repeatable outputs for identical inputs. When the models themselves produce different token prediction values on every run, it is fairly pointless; it was delivered at DevDay 2023 as a false promise along with `system_fingerprint`, and exposed as a waste of time shortly after.

The 'best" is top_p of 1e-6, as this makes any conceivable distribution only output the top rank of a run. Larger models generally will have lower perplexity, but this depends on the task and post-training - whether they can be monotonous and overfitted.
