In Azure OpenAI deployments with openai version 0.28.1, the 'engine' parameter, not the 'model' parameter, determines the model used for completion, which can lead to unexpectedly higher costs

Hello everyone,

I wanted to share an observation about calling the OpenAI API within an Azure environment. When using the openai.ChatCompletion.create function with version 0.28.1 of the openai library, the model actually used is determined not by the 'model' parameter but by a separate parameter, 'engine', which refers to the deployment configured in Azure OpenAI Studio. The value assigned to 'model' appears to be ignored entirely.

It seems that in this version of the library, the 'model' parameter has no direct influence on the model used for completion; the 'engine' parameter alone dictates which model is invoked.
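To illustrate, here is a minimal sketch of the call pattern described above, using the openai 0.28.1 Azure-style configuration. The endpoint, key, and deployment name are placeholders, not real values:

```python
import openai

# Azure-style configuration for openai==0.28.1 (values below are placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

response = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",    # the Azure deployment; this decides the actual model
    model="gpt-3.5-turbo",          # observed to be ignored on Azure in this version
    messages=[{"role": "user", "content": "Hello"}],
)
```

If "my-gpt4-deployment" was created from a GPT-4 model in Azure OpenAI Studio, this request is served (and billed) as GPT-4, despite 'model' naming GPT-3.5.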

I noticed this inconsistency when I realized that my costs aligned with the pricing for GPT-4, which is significantly higher than the pricing for GPT-3.5 that I had expected based on the 'model' parameter.

If anyone else has encountered similar behavior or has insights into this mechanism, I'd be interested in hearing about your experiences or any additional information you may have.

Thank you for your attention.