Using a GPT-4 model for non-chat purposes, similar to using gpt-3.5-turbo-instruct

I am not sure if this is a feature request or a problem I’m facing:
I have functioning code that uses the default ‘gpt-3.5-turbo-instruct’ model and I want to replace it with a GPT-4 model, but when I simply switch the model name, the returned object is an OpenAIChat object, which is not desirable.

```python
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm.model_name
'gpt-3.5-turbo-instruct'
>>> type(llm)
<class 'langchain_community.llms.openai.OpenAI'>
```

When I try to specify GPT-4, I get a warning instead:

```python
>>> llm = OpenAI(model_name='gpt-4')
UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain_community.chat_models import ChatOpenAI`
  warnings.warn(
```

Yep, unfortunately gpt-4 is not available as a completion model. You have to use the chat API; there’s no way around it.

The good news is that in most cases the prompt can be easily adapted to achieve the results you’re looking for. If you want, we can take a look.
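For instance (an illustrative sketch, not from the thread — the prompt and wording are made up), an instruct-style prompt typically splits into a system/user message pair carrying the same content:

```python
# Old completion-style prompt for gpt-3.5-turbo-instruct:
instruct_prompt = (
    "Summarize the following text in one sentence:\n\n"
    "LangChain wraps LLM APIs."
)

# Equivalent chat-format messages for gpt-4: the instruction moves
# into a system message, and the input text becomes a user message.
chat_messages = [
    {"role": "system", "content": "Summarize the text the user provides in one sentence."},
    {"role": "user", "content": "LangChain wraps LLM APIs."},
]
```

The model sees essentially the same information; only the packaging changes.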

But yeah, it’s really unfortunate that completions is falling by the wayside…


Hi there!

It’s perfectly fine and common to use GPT-4 for tasks other than chat, but you still have to use the Chat Completions endpoint and will get a chat object as output.

GPT-4 cannot be used with the Completions endpoint that you’d use for the instruct model.

EDITED:
@Diet - sorry, looks like I was typing while you already responded!
