Response of ChatGPT is not the same as response of AzureChatOpenAI?

When using the same prompt to answer True/False for a given summary with both ChatGPT and the Azure OpenAI Chat API, I frequently receive conflicting responses. What steps can I take to get answers consistent with ChatGPT?


Welcome to the Community!

Could you share some of the prompts, with explicit examples of the responses?

We don’t handle anything in Microsoft Azure’s domain; their tools, access, and resources are managed differently, and this is an OpenAI forum, not an Azure one. So while we’ll try to help as best we can, we’re probably going to be limited, because we can’t even verify that the models you’re interacting with are the same. OpenAI’s API and Azure’s might be serving different models altogether, which would explain the different results, and we would have no way of knowing that.

I’d recommend asking Azure for help on this. They would be able to tell you the answers to those questions. We can’t.


Could you post a minimal test case showing the problem, e.g. in Python against the Chat Completions API, with a unit test around the prompt code that demonstrates the flip-flop when called in a loop?

You may be able to modify the prompt or the API query parameters to get greater consistency.
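As a starting point, here is a rough sketch of what such a looped consistency test could look like. This assumes the OpenAI Python SDK v1.x; the model name, prompt text, and trial count are placeholders, and `temperature=0` plus the `seed` parameter only reduce sampling variance rather than guaranteeing identical outputs:

```python
# Sketch of a consistency check: ask the same True/False question n times
# and tally the answers. Assumes the `openai` v1.x SDK; model and prompt
# below are placeholders, not the original poster's actual setup.
import os
from collections import Counter


def normalize_answer(text):
    """Map a model reply to 'True'/'False', or None if neither is found."""
    cleaned = text.strip().lower()
    if cleaned.startswith("true"):
        return "True"
    if cleaned.startswith("false"):
        return "False"
    return None


def run_trials(ask, n=10):
    """Call `ask()` n times and count the normalized answers."""
    return Counter(normalize_answer(ask()) for _ in range(n))


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # swap in AzureOpenAI to test the Azure side

    client = OpenAI()
    prompt = "Answer only True or False: the summary below is accurate.\n..."

    def ask():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model name
            temperature=0,         # reduce sampling variance
            seed=0,                # best-effort determinism
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # If the output is stable, every trial should land in one bucket.
    print(run_trials(ask))
```

Running the same harness once with `OpenAI` and once with `AzureOpenAI` (pointing at your Azure deployment) would at least show whether the flip-flop comes from sampling noise or from a genuine difference between the two endpoints.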