How do you set the temperature and other sampling parameters of the model in the OpenAI Assistants API?

So it looks like temperature can be used? For an assistant I want to use with the code interpreter, should I set the temperature lower, something like 0.2? And is it a good idea to use gpt-4o with the Assistants API?

OpenAI heard the appeal: both temperature and top_p can now be set in the Assistants API.
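For reference, here's a minimal sketch of setting these with the official openai Python SDK when creating an assistant. Treat it as illustrative rather than definitive; the Assistants API surface has changed between versions, so check the current API reference. Note that OpenAI's docs generally recommend altering temperature or top_p, but not both.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Sampling parameters can be set when the assistant is created.
assistant = client.beta.assistants.create(
    name="Data Helper",  # hypothetical name for illustration
    instructions="Write and run Python code to answer data questions.",
    model="gpt-4o",
    tools=[{"type": "code_interpreter"}],
    temperature=0.2,  # lower values make output more deterministic
)
print(assistant.id)
```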

I would start with top_p rather than temperature; a value of 0.2 limits the AI to producing only highly certain tokens as output. It can also be set per run, as in the sketch below.
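Here's a rough example of overriding top_p on an individual run instead of on the assistant itself. The assistant ID is a placeholder; in practice it comes from the assistants.create call above.

```python
from openai import OpenAI

client = OpenAI()

# Start a thread with the user's request.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Plot a sine wave."}]
)

# Run-level sampling parameters override the assistant's defaults.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",  # placeholder: use your real assistant ID
    top_p=0.2,  # sample only from the top 20% of probability mass
)
```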

gpt-4o is frustratingly inadequate in many ways. You can use it directly or in Assistants, but when the code it sends to the code interpreter produces an error internally and it has to retry, you are the one paying for the tokens twice.