Using gpt-4o-mini with ChatOpenAI, but gpt-3.5-turbo-0125 is used instead

Hi:

I am trying to use 'gpt-4o-mini' with ChatOpenAI, with code like the below:

    from langchain_openai import ChatOpenAI

    OPENAI_MODEL_4oMini = "gpt-4o-mini"
    chatmodel = ChatOpenAI(model=OPENAI_MODEL_4oMini, temperature=0, max_tokens=500)

The API call succeeds, but when I review the OpenAI response:

response_metadata={'token_usage': …, 'model_name': 'gpt-3.5-turbo-0125', }

It shows the model_name is 'gpt-3.5-turbo-0125', but I passed 'gpt-4o-mini'. Why does it use gpt-3.5?

Sounds like a langchain issue.

I would speculate it is defaulting to 3.5 when an unknown model is requested and the library hasn't been updated to reflect the existence of the new model.


I am not sure of the reason. I used the latest langchain (0.2.10), and I think that if the input model is unknown, langchain should throw an exception instead of silently using a different model, which is tricky.
Do you know how I can also submit a ticket to langchain, in its GitHub project (Issues · langchain-ai/langchain · GitHub)? Thank you.

It turns out that it was my fault: I put the invoke in another function, but in that function I constructed a default ChatOpenAI, which uses gpt-3.5 as its default model.
Thank you @anon22939549
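For anyone hitting the same thing, the mistake can be sketched without touching the API at all. The snippet below uses a minimal stand-in class (`FakeChatOpenAI` is not the real langchain class, just an illustration of its default-`model` behavior) to show how building a fresh client inside a helper silently falls back to the library default:

```python
# Minimal stand-in for ChatOpenAI: like the real class, `model`
# defaults to "gpt-3.5-turbo" when no model is passed in.
class FakeChatOpenAI:
    def __init__(self, model="gpt-3.5-turbo", temperature=0, max_tokens=None):
        self.model = model

    def invoke(self, prompt):
        # Echo the model name back, mimicking response_metadata["model_name"].
        return {"model_name": self.model}


OPENAI_MODEL_4oMini = "gpt-4o-mini"
chatmodel = FakeChatOpenAI(model=OPENAI_MODEL_4oMini)  # configured correctly here


def ask_buggy(prompt):
    # Bug: constructs a *new* client instead of using `chatmodel`,
    # so the library's default model is used.
    return FakeChatOpenAI().invoke(prompt)


def ask_fixed(prompt):
    # Fix: reuse the client that was configured with gpt-4o-mini.
    return chatmodel.invoke(prompt)


print(ask_buggy("hi")["model_name"])  # gpt-3.5-turbo
print(ask_fixed("hi")["model_name"])  # gpt-4o-mini
```

Checking `response_metadata['model_name']` on the response, as done above in the thread, is what exposed the mismatch in the first place.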
