Will the Completion Endpoint be dropped?

The description says:

The model "gpt-3.5-turbo-1106" is part of the GPT-3.5 family and is considered the most capable and cost-effective model in this family. It has been optimized for chat using the Chat Completions API but also performs well for traditional completion tasks. This optimization leads to lower cost and improved performance compared to other GPT-3.5 models.

However, calling the Completions endpoint with "gpt-3.5-turbo-1106"
leads to this error:

[11:59:21] {
  "error": {
    "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
    "type": "invalid_request_error",
    "param": "model",
    "code": null
  }
}
[11:59:21] -----------------
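For reference, this failure can be reproduced with a short Python sketch using only the standard library. The helper names (`completion_payload`, `post`) are mine, not part of any SDK; only the endpoint path and request shape come from the public API.

```python
import json
import urllib.request

API_BASE = "https://api.openai.com"

def completion_payload(model: str, prompt: str) -> dict:
    """Build a legacy Completions request body (prompt string, not messages)."""
    return {"model": model, "prompt": prompt, "max_tokens": 16}

def post(path: str, payload: dict, api_key: str) -> dict:
    """POST a JSON payload to the OpenAI API and return the parsed response."""
    req = urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (needs a real API key; this call raises urllib.error.HTTPError,
# and the error body is the invalid_request_error shown above):
#   post("/v1/completions",
#        completion_payload("gpt-3.5-turbo-1106", "Hello"), api_key)
```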

Your code please, guessing is not fun :slight_smile:

I have now tested that the models below work with the Chat Completions endpoint. At least "gpt-3.5-turbo-1106" ONLY works with the Chat Completions endpoint and throws an error when used with the Completions endpoint.

Since that is the case, I assume the Completions endpoint is legacy and we can ignore it going forward. I plan to remove all support for the Completions endpoint and only use the Chat Completions endpoint.
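For comparison, this is the request shape the Chat Completions endpoint expects: a `messages` list instead of a `prompt` string. A minimal Python sketch (helper names are mine; only the path and body shape come from the public API):

```python
import json
import urllib.request

def chat_payload(model: str, user_text: str) -> dict:
    """Chat Completions takes a list of role/content messages, not a prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

def chat_complete(payload: dict, api_key: str) -> str:
    """POST to /v1/chat/completions and return the assistant's reply text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (needs a real API key):
#   chat_complete(chat_payload("gpt-3.5-turbo-1106", "Hello"), api_key)
```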

PS: My code will not help you much :slight_smile: as it's PowerBASIC.
But take a look at the comments: they show which model name is given and which model is actually used, which is also interesting.

CASE 1: W01="gpt-3.5-turbo-1106"
CASE 2: W01="gpt-3.5-turbo"        ' Calls "gpt-3.5-turbo-0613"
CASE 3: W01="gpt-3.5-turbo-16k"    ' Calls "gpt-3.5-turbo-16k-0613"
CASE 4: W01="gpt-4-1106-preview"   ' 128K context?
CASE 5: W01="gpt-4"                ' Calls "gpt-4-0613"
CASE 6: W01="gpt-4-vision-preview" ' Vision preview
' Set default model
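The alias-to-snapshot mapping noted in those comments can be captured in a small lookup table. A Python sketch for illustration; the resolved names are the ones the server's "model" field reported at the time of this thread and will change as OpenAI repoints the aliases:

```python
# Dated snapshots that the stable aliases resolved to in the responses
# observed above. The server's "model" field reports the snapshot used.
ALIAS_TO_SNAPSHOT = {
    "gpt-3.5-turbo": "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo-16k": "gpt-3.5-turbo-16k-0613",
    "gpt-4": "gpt-4-0613",
}

def resolve(model: str) -> str:
    """Return the dated snapshot an alias pointed to, or the name itself
    if it is already a dated/preview name."""
    return ALIAS_TO_SNAPSHOT.get(model, model)
```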

I don't think they will drop the Completions endpoint. The gpt-3.5-turbo-instruct is supposed to be a long-term supported model. I believe there are a lot of applications that rely on the Completions API.

You mean the
"gpt-3.5-turbo-instruct" ' Temporary Model 16K
is not temporary?
Then we have just one model left for the Completions endpoint?
I will run more tests with the models later and look at the server's responses.
With some models I get a response from the server saying they will be dropped on January 4.

The 3.5 instruct isn't a temporary model. It's meant to replace all the old completion models.


The initial “function calling and deprecations” blog article referred to the endpoint itself as also being retired with the retirement of older models from 2022.

However that and other documentation has been clarified.

Models such as davinci-002, babbage-002, and gpt-3.5-turbo-instruct will remain on the Completions endpoint, with no retirement date yet on the horizon for those.
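Given that split, client code can route requests by model name. A sketch of one way to do it; the classification below reflects the model lists discussed in this thread, not an authoritative list from OpenAI:

```python
# Models that (per this thread) remain on the legacy Completions endpoint.
COMPLETIONS_MODELS = {"davinci-002", "babbage-002", "gpt-3.5-turbo-instruct"}

def endpoint_for(model: str) -> str:
    """Pick the endpoint path for a model name.

    The three completion models above go to /v1/completions; chat-tuned
    models (gpt-3.5-turbo*, gpt-4*) go to /v1/chat/completions.
    """
    if model in COMPLETIONS_MODELS:
        return "/v1/completions"
    return "/v1/chat/completions"
```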


You might also want to bookmark the model endpoint compatibility page…


OK, thanks for the valuable information.
I have now delayed my further tests until I know whether my current OpenAI subscription will end up with Microsoft or with Anthropic.

And then whether they really plan to :slight_smile: "keep the Completions endpoint".

IMHO this is horrible for production.

You should almost certainly stick to the standard model names (gpt-3.5-turbo, gpt-4, etc.) and not deploy with hard-coded values for "dated" preview models.

Just make sure you are testing previews in staging/development before the switchover happens.
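One way to keep dated names out of shipped code is to read the model name from configuration with a stable alias as the default. A sketch; the environment variable name `OPENAI_MODEL` is my choice, not a convention the API defines:

```python
import os

def configured_model(default: str = "gpt-3.5-turbo") -> str:
    """Read the model name from the OPENAI_MODEL environment variable,
    falling back to a stable alias.

    Deployments then switch models (e.g. to test a dated preview in
    staging) via configuration, with no code change.
    """
    return os.environ.get("OPENAI_MODEL", default)
```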

Yes, definitely, but note the names behind the comment sign.
They are comments, not executing code.
They show the server's response, i.e. which model was actually used.


Decide for yourself and read the manual for my software.