How do I get a consistent result for a particular prompt when using the OpenAI API?

I am using the OpenAI API with the GPT-3.5 Turbo model and the chat completions endpoint. I have a large prompt for generating a document, but I get a different result every time I run the API with the same prompt.

Is there any parameter in the API to get a consistent result?

Hi and welcome to the Developer Forum!

You can set the temperature to 0 to reduce the amount of variation, but it will not eliminate it entirely; AI generation cannot be guaranteed to produce the same output every time.
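For example, here is a minimal sketch of passing `temperature=0` to the chat completions endpoint; it assumes the v1.x `openai` Python package and an `OPENAI_API_KEY` environment variable, and the prompt is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Draft a one-paragraph project summary."},
    ],
    temperature=0,  # near-greedy sampling: reduces, but does not eliminate, run-to-run variation
)
print(response.choices[0].message.content)
```

Even with `temperature=0`, repeated calls with an identical prompt can still differ slightly from run to run.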

Is there any parameter in the OpenAI API similar to a random state (seed) in ML model development?

Not for the language models, no. DALL·E can use a seed value, but there is no equivalent for the LLM APIs.