How do I set the temperature and other sampling parameters of the model in the OpenAI Assistants API?

Our use case of generating deterministic text also requires the ability to adjust temperature. For now, we will move away from Assistants and over to the Chat Completions API.
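For anyone making the same move: the Chat Completions endpoint does accept `temperature`, `top_p`, and a best-effort `seed`. A minimal sketch, assuming the `openai` v1.x Python SDK (the model name is illustrative, and the live call is shown commented out since it needs an API key):

```python
def build_request(question: str) -> dict:
    """Assemble a deterministic-leaning Chat Completions request,
    collecting the sampling parameters the Assistants API does not
    currently expose."""
    return {
        "model": "gpt-4-turbo",  # illustrative model name
        "messages": [{"role": "user", "content": question}],
        "temperature": 0,        # greedy-leaning sampling
        "top_p": 1,
        "seed": 42,              # best-effort reproducibility, not a guarantee
    }

# Usage (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**build_request("Summarise Q3 revenue."))
# print(resp.choices[0].message.content)
```

Even with `temperature=0` and a fixed `seed`, outputs are only mostly stable; the docs describe `seed` as best-effort.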

Hello sir,
I am currently using the Assistants API with the retrieval tool, providing a file along with the user message. I have a query regarding token count: the response from the API reports around 20k-30k tokens, while the user message itself has only around 200 tokens. What is the internal mechanism that causes the count to increase like this?

I love the Assistants API; thanks for this great feature.

We also very much need to make the answers more deterministic. We provide suggestions for data analysis, and it is not great to get very different answers to the same question.

Any update on when these settings will be available?

I also need to make the answers more deterministic. My use case is function calling (tools). When prompting the assistant to call a function, it currently injects AI-fabricated arguments, even though I've clearly defined all the names, types, descriptions, and instructions. I would love to be able to set the temperature and top_p.
