OpenAI Deprecation Summary

Completions API, by January 4, 2024 (source)

| Older model | New model |
| --- | --- |
| ada | ada-002 |
| babbage | babbage-002 |
| curie | curie-002 |
| davinci | davinci-002 |
| davinci-instruct-beta | gpt-3.5-turbo-instruct |
| curie-instruct-beta | gpt-3.5-turbo-instruct |
| text-ada-001 | gpt-3.5-turbo-instruct |
| text-babbage-001 | gpt-3.5-turbo-instruct |
| text-curie-001 | gpt-3.5-turbo-instruct |
| text-davinci-001 | gpt-3.5-turbo-instruct |
| text-davinci-002 | gpt-3.5-turbo-instruct |
| text-davinci-003 | gpt-3.5-turbo-instruct |

:information_source: Note: babbage-002 and davinci-002 have been released; the other models are coming later.
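Since the replacements are served through the same completions endpoint, the migration is mostly a model-name swap. As a rough sketch (the helper and request shape here are illustrative, assuming the public `POST /v1/completions` JSON body):

```python
# Hypothetical sketch: legacy and replacement models share the same
# POST /v1/completions endpoint, so migrating is largely a matter of
# swapping the "model" field in the request body.

def build_completion_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Assemble the JSON body for a /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }

# Before: a deprecated instruction-following model.
legacy = build_completion_request("text-davinci-003", "Summarize: ...")

# After: the announced replacement, everything else unchanged.
migrated = {**legacy, "model": "gpt-3.5-turbo-instruct"}
```

The prompt, sampling parameters, and endpoint stay the same; only the model name changes.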


Fine-tuned models based on older models, by January 4, 2024 (source)

Developers wishing to continue using their fine-tuned models beyond January 4, 2024 will need to fine-tune replacements atop the new base GPT-3 models (ada-002, babbage-002, curie-002, davinci-002).

:information_source: Note: babbage-002 and davinci-002 have been released; the other models are coming later.

You can also fine-tune against gpt-3.5-turbo, with gpt-4 coming this fall.

We will be providing support to users who previously fine-tuned models to make this transition as smooth as possible. In the coming weeks, we will reach out to developers who have recently used these older models, and will provide more information once the new completion models are ready for early testing.
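In practice, re-creating a fine-tune means starting a new job against one of the replacement base models. A minimal sketch (the payload shape mirrors a fine-tuning jobs request; the file ID and helper name are placeholders, not taken from the post):

```python
# Hypothetical sketch of re-creating a fine-tune on a replacement base model.
# The training file ID below is a placeholder for illustration.

def build_fine_tune_job(base_model: str, training_file_id: str) -> dict:
    """Assemble a request body for a fine-tuning job."""
    return {
        "model": base_model,            # e.g. "babbage-002" or "davinci-002"
        "training_file": training_file_id,
    }

# Re-train the old davinci-based fine-tune atop the new base model.
job = build_fine_tune_job("davinci-002", "file-abc123")  # placeholder file ID
```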


Embeddings, by January 4, 2024 (source)

All usages need to be updated to text-embedding-ada-002.

We will cover the financial cost of users re-embedding content with these new models. We will be in touch with impacted users over the coming days.
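Re-embedding is a batch job over whatever content you have stored: each text goes back through the embeddings endpoint with the new model name. A sketch, assuming the `POST /v1/embeddings` request shape (which accepts a batch of inputs):

```python
# Hypothetical sketch of re-embedding stored content with the replacement model.

def build_embedding_request(texts: list[str]) -> dict:
    """Assemble a request body for POST /v1/embeddings."""
    return {
        "model": "text-embedding-ada-002",  # the single replacement model
        "input": texts,                     # the endpoint accepts a list (batch)
    }

# Every previously embedded document gets re-sent in batches like this.
req = build_embedding_request(["first document", "second document"])
```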


Edits API, by January 4, 2024 (source)

Users of the Edits API and its associated models (e.g., text-davinci-edit-001 or code-davinci-edit-001) will need to migrate to GPT-3.5 Turbo by January 4, 2024.
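The Edits API took a text to edit plus an instruction; a chat model has no dedicated slot for either, so one common migration pattern (a sketch, not an official recipe) is to map the instruction onto the system message and the text onto the user message:

```python
# Hypothetical sketch of translating an Edits-style (input, instruction) pair
# into a Chat Completions request body.

def edit_to_chat_request(text: str, instruction: str) -> dict:
    """Map an Edits API call onto a chat-style payload."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": instruction},  # what to do
            {"role": "user", "content": text},           # the text to edit
        ],
    }

req = edit_to_chat_request("Teh quick brown fox", "Fix the spelling mistakes")
```

The edited text then comes back as the assistant message rather than in a dedicated edits response field.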


Chat Completion Snapshot Models, by June 13, 2024 (source)

| Older model | New model |
| --- | --- |
| gpt-3.5-turbo-0301 | gpt-3.5-turbo / gpt-3.5-turbo-0613 |
| gpt-4-0314 | gpt-4 / gpt-4-0613 |
| gpt-4-32k-0314 | gpt-4-32k / gpt-4-32k-0613 |
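If your code pins snapshot names, a small lookup table keeps the migration in one place. A sketch (this helper is illustrative; the mapping just restates the table above, resolving each deprecated snapshot to its dated replacement):

```python
# Hypothetical helper that resolves a deprecated pinned snapshot to its
# announced replacement, falling back to the name unchanged.

SNAPSHOT_REPLACEMENTS = {
    "gpt-3.5-turbo-0301": "gpt-3.5-turbo-0613",
    "gpt-4-0314": "gpt-4-0613",
    "gpt-4-32k-0314": "gpt-4-32k-0613",
}

def resolve_model(name: str) -> str:
    """Return the replacement snapshot for a deprecated name, else the name itself."""
    return SNAPSHOT_REPLACEMENTS.get(name, name)
```

Code that uses the unpinned aliases (gpt-3.5-turbo, gpt-4, gpt-4-32k) needs no change, since those track the current snapshot automatically.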
8 Likes

I made a post about how I’m feeling about this, and also asked others what they’re thinking about it. The biggest takeaway from my opinions there was this: “I was hoping that instead of retiring models, the price per token could be reduced, so that more ambitious projects with monetary constraints could be experimented with.”

1 Like

If you look at the reasoning, it becomes clear why it’s being done this way: it comes down to compute resources. While it would be great to have the spare capacity to keep a legacy-model contingency in place, that hardware is better utilised on newer, more capable (and more heavily used) models.

1 Like

I would agree with what @Foxalabs said, and add that supporting many different services (model versions) is an expensive proposition manpower-wise. My bigger disappointment is that they’re getting rid of the Text Completions API, and therefore text-davinci-003. :frowning: There are a lot of language tasks that davinci performs way better than turbo and faster than gpt-4. It’s like they’re killing their middle child :frowning:

1 Like

They have mentioned training a replacement for text-davinci in the form of a completions version of 3.5-Turbo. How well it performs will be the key, I think; your use case would make you a good alpha candidate for it.

1 Like

Yeah, I was just reading up on this new instruct version of turbo. I’ll be curious to see how well it performs. I got a message yesterday saying I’m really close to Trust Level 3 status, so hopefully soon I’ll get access to a lot more models :slight_smile:

1 Like

Come join the club! It’s a fun hangout with extra stuff!

2 Likes

I need to read like 4,000 more posts lol

3 Likes

I’m always worried that davinci will get deprecated in favor of all the fancy new chat endpoints, and I’m glad this isn’t the case.

1 Like