When can we expect to see the seed feature back as a model setting for GPT-4.1 API use? I am looking to decrease variability between API runs; currently I use detailed prompt engineering, temperature, top_p, and identical inputs between runs.
You can still use the seed parameter with GPT-4.1 and other models; it's just that you are restricted to the Chat Completions API. You can check which models are supported by the Chat Completions endpoint here.
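For reference, here is a minimal sketch of passing seed through Chat Completions with the official Python SDK (the model name, prompt, and seed value are just placeholders for your own setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Summarize RFC 2119 in one sentence."}],
    seed=42,        # best-effort determinism across runs
    temperature=0,
    top_p=1,
)

print(resp.choices[0].message.content)
# system_fingerprint identifies the backend configuration; determinism is
# only expected between runs that report the same fingerprint.
print(resp.system_fingerprint)
```

Note that seed is best-effort: identical inputs plus the same seed should usually reproduce the same output, but it isn't guaranteed if the backend configuration changes.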
Ah ok, that makes sense. I am using the Agents SDK and passing the seed through extra_args on ModelSettings, but that SDK is built on the Responses API (rough sketch below). Thanks for the information.
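For context, this is roughly what I mean, assuming a recent openai-agents release where ModelSettings exposes extra_args (the agent name, prompt, and seed value are placeholders):

```python
from agents import Agent, ModelSettings, Runner

# Illustrative agent; the seed is forwarded as an extra request argument.
agent = Agent(
    name="Deterministic-ish assistant",
    instructions="Answer concisely.",
    model="gpt-4.1",
    model_settings=ModelSettings(
        temperature=0,
        top_p=1,
        extra_args={"seed": 42},  # passed through to the underlying API call
    ),
)

# The default model provider targets the Responses API, where seed is not a
# supported setting, so the value may be ignored or rejected there.
result = Runner.run_sync(agent, "Summarize RFC 2119 in one sentence.")
print(result.final_output)
```

Presumably pointing the agent at a Chat-Completions-backed model (e.g. OpenAIChatCompletionsModel) would let the seed take effect, though I haven't confirmed that yet.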
I see that the seed parameter was in beta; do we know if it is something that will be added to the Responses API?