How to set temperature and other sampling parameters of the model in the OpenAI Assistants API?

Thanks for posting and welcome to the forum!

We do not currently support setting temperature or other sampling parameters in the Assistants API, but it is something we're seeking feedback on during the beta period.

Can you share more about your use case for modifying these parameters?

Assistant Runs currently sample multiple messages in a loop, and results can be choppy if we apply a very high or very low temperature to every message we sample.
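For comparison, the Chat Completions API does accept these sampling parameters directly on each request. Below is a minimal sketch of a request body showing where `temperature` and `top_p` would go (the model name and values are illustrative, not a recommendation):

```python
import json

# Sketch of a Chat Completions request body (not the Assistants API).
# Model name and parameter values are illustrative.
payload = {
    "model": "gpt-4",  # illustrative model name
    "messages": [{"role": "user", "content": "Hello"}],
    # Sampling parameters -- generally adjust temperature OR top_p, not both:
    "temperature": 0.2,  # range 0.0-2.0; lower values are more deterministic
    "top_p": 1.0,        # nucleus sampling cutoff
}

body = json.dumps(payload)  # serialized JSON sent to the endpoint
```

Because an Assistant Run issues several model calls internally, there is no equivalent single place to attach these parameters today.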