"Moving from text completions to chat completions"

I’m pretty concerned when I read this in the latest blog announcement:


Moving from text completions to chat completions

We introduced the in March, and it now accounts for 97% of our API GPT usage.

The initial Completions API was introduced in June 2020 to provide a freeform text prompt for interacting with our language models. We’ve since learned that we can often provide better results with a more structured prompt interface. The chat-based paradigm has proven to be powerful, handling the vast majority of previous use cases and new conversational needs, while providing higher flexibility and specificity. In particular, the Chat Completions API’s structured interface (e.g., system messages, function calling) and multi-turn conversation capabilities enable developers to build conversational experiences and a broad range of completion tasks. It also helps lower the risk of prompt injection attacks, since user-provided content can be structurally separated from instructions.

(ref)


the use cases for completion and chat are different.

How can I achieve completion in the API, in the playground when using chat endpoints?

2 Likes

There will be a trained model to replace completions, looks like this will be of interest to you

gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003 . This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.

2 Likes

Hi @u1i

You can pass the prompt to be completed in a message object with "role":"assistant" and it will take it from there to complete the prompt with a response with "role":"assistant" similar to how completions endpoint used to.

1 Like

Small hiccup to be worked out: the chat AI currently has to “complete” its way past an <|im_end|> and <|im_start|> token and its undocumented delineation that is passed to the context, along with unseen “assistant” and carriage returns.

Completion instead thus becomes instruction-following and fine-tune following.

1 Like

Hey @u1i the intent of this post is to make clear that we actually don’t think this is the cases, Chat is intended to be general purpose and do the same stuff as completions. It is not 100% there right now but with further steer-ability improvements and reduced “chattiness” we are going to ideally have something that is better than completions is today.

8 Likes

Chat/GPT-4 and Completions/GPT-3-davinci behave totally differently, and it’s good that way.

That’s why I use both, or combine both to get the best output. Think of left side of the brain and right side.

What exactly do they mean when they say the new model will be a drop-in replacement in the Completions API? Also, will the outputs from that new model be just as good as those from the text-davinci-003 model right now?

As Logan has posted a couple of posts down from mine, they are building a model that they hope will perform better than the previous completions models,.

I see. When using the new model, are you going to have the Chat Mode interface, or will you have the Complete Mode looking interface with just the basic text box?

You are talking to your fellow forum user, not an employee of OpenAI, even if they may use deceptive language.

You seem to be describing the “playground” which isn’t really an end product, but rather a sales and experimentation tool. It right now doesn’t even count completion model input tokens correctly, so seems kind of ‘back burner’.

If the new completion model is written so that developer applications can transition seamlessly, then it can be reasoned that the playground legacy “box” can also just be pointed at the replacement models.

1 Like

I do actually use the ‘Playground’ a lot, but my main point is: GPT-4 is incredible. The chat API endpoints seem to be the only way to consume it.

GPT-3 is entirely different and needs to stay :slight_smile:

If you set the “type” of current message from “user” to “assistant” as input you get a more completions style output as it is right now, worth experimenting with.

2 Likes

You can’t just replace text completions with gpt 3.5 turbo. text-davinci-003 performs so much better, unlike gpt 3.5 which needs long system prompts. The main point of OpenAI is that text completions were not as popular as chat completions, and i could see why. But text completion models and chat completion models are just diffrent things, they are not interchangeable at all. GPT 3.5 turbo is just not good enough,

1 Like

Welcome to the forum!

I do not believe it is the intention to move developers from a completions model to a chat model, if you note this section here GPT-4 API general availability and deprecation of older models in the Completions API and read this section,

Starting January 4, 2024, older completion models will no longer be available, and will be replaced with the following models: gpt-3.5-turbo-instruct

The new instruct models will be tuned to hopefully perform better than the completion models they replace.

Developers using other older completion models (such as text-davinci-003 ) will need to manually upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their API requests. gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003 . This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.

Hopefully this addresses your comment.

The new instruct models will be tuned to hopefully perform better than the completion models they replace.

We will see, we will see…