I’m pretty concerned when I read this in the latest blog announcement:
Moving from text completions to chat completions
We introduced the in March, and it now accounts for 97% of our API GPT usage.
The initial Completions API was introduced in June 2020 to provide a freeform text prompt for interacting with our language models. We’ve since learned that we can often provide better results with a more structured prompt interface. The chat-based paradigm has proven to be powerful, handling the vast majority of previous use cases and new conversational needs, while providing higher flexibility and specificity. In particular, the Chat Completions API’s structured interface (e.g., system messages, function calling) and multi-turn conversation capabilities enable developers to build conversational experiences and a broad range of completion tasks. It also helps lower the risk of prompt injection attacks, since user-provided content can be structurally separated from instructions.
There will be a trained model to replace completions, looks like this will be of interest to you
gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003 . This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.
You can pass the prompt to be completed in a message object with "role":"assistant" and it will take it from there to complete the prompt with a response with "role":"assistant" similar to how completions endpoint used to.
Small hiccup to be worked out: the chat AI currently has to “complete” its way past an <|im_end|> and <|im_start|> token and its undocumented delineation that is passed to the context, along with unseen “assistant” and carriage returns.
Completion instead thus becomes instruction-following and fine-tune following.
Hey @u1i the intent of this post is to make clear that we actually don’t think this is the cases, Chat is intended to be general purpose and do the same stuff as completions. It is not 100% there right now but with further steer-ability improvements and reduced “chattiness” we are going to ideally have something that is better than completions is today.
What exactly do they mean when they say the new model will be a drop-in replacement in the Completions API? Also, will the outputs from that new model be just as good as those from the text-davinci-003 model right now?
You are talking to your fellow forum user, not an employee of OpenAI, even if they may use deceptive language.
You seem to be describing the “playground” which isn’t really an end product, but rather a sales and experimentation tool. It right now doesn’t even count completion model input tokens correctly, so seems kind of ‘back burner’.
If the new completion model is written so that developer applications can transition seamlessly, then it can be reasoned that the playground legacy “box” can also just be pointed at the replacement models.
You can’t just replace text completions with gpt 3.5 turbo. text-davinci-003 performs so much better, unlike gpt 3.5 which needs long system prompts. The main point of OpenAI is that text completions were not as popular as chat completions, and i could see why. But text completion models and chat completion models are just diffrent things, they are not interchangeable at all. GPT 3.5 turbo is just not good enough,
Starting January 4, 2024, older completion models will no longer be available, and will be replaced with the following models: gpt-3.5-turbo-instruct
The new instruct models will be tuned to hopefully perform better than the completion models they replace.
Developers using other older completion models (such as text-davinci-003 ) will need to manually upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their API requests. gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003 . This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.