PLEASE don't remove davinci-003 — it performs better than any ChatGPT model for articles!

I hope there is a way to keep at least the text-davinci-003 model available in the future!

It works so much better than GPT-3 or 4 at following specific linguistic instructions for creating articles!
The GPT models simply ignore many of the instructions. I've compared them across many prompts and thousands of articles.

Articles generated with davinci-003 even pass AI content detectors, while the same prompt used with any GPT model won't. That's not the main reason to keep it around, just another point in its favor.

The quality of the text davinci generates is far higher than what any of the ChatGPT models produce.

So please OpenAI, save davinci-003 and keep it in the API :slight_smile:


Yes, I love the “text-davinci-003” model! The OpenAI playground is so fun and I use it so much. And purely for fun — I'm actually not a developer or anything lol.

I saw in OpenAI’s most recent blog post that the base “davinci” model will be replaced with a new model called “davinci-002”. (I wonder why they’re not making it 003, but oh well.) I’m hoping the text that new model generates will be just as good as what “text-davinci-003” generates, otherwise it won’t be as fun anymore, ya know.


Have you tried GPT-4? Honestly, it’s hard to go back to text-davinci-003 now. But I'm wondering what your experience has been.

P.S. Granted davinci-003 is like an “uncensored” model, without the apologies, but other than that, the quality isn’t as good as GPT-4 IMO.


Huh? How come when I deleted the post it says “post deleted by author” instead of just going away?

I do not agree that text-davinci-003 is better than gpt-4, but it is definitely, and I mean DEFINITELY, better than gpt-3.5-turbo. gpt-3.5-turbo also has a bad habit of needing long system prompts, and it is just not as smart as text-davinci-003.

Shame they’re discontinuing text-davinci. The model is far better than the chat models (including gpt-4) at extracting data into JSON objects. DaVinci also has a better understanding of the data while returning a complete response on average 3 seconds faster than gpt-3.5 via the API.
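For anyone comparing the two for extraction tasks: text-davinci-003 is served through the legacy completions endpoint, which takes a single prompt string, while the GPT chat models use the chat completions endpoint, which takes a list of role-tagged messages. A minimal sketch of the two request payloads below; the extraction prompt and field names are illustrative assumptions, not from this thread:

```python
import json

# Hypothetical record and extraction instruction (illustrative only)
record = "Jane Doe, 34, Berlin"
instruction = (
    "Extract the following record into a JSON object with keys "
    '"name", "age", and "city".\n\n' + record
)

# Legacy /v1/completions payload (text-davinci-003 style): one prompt string
completion_payload = {
    "model": "text-davinci-003",
    "prompt": instruction,
    "max_tokens": 100,
    "temperature": 0,
}

# /v1/chat/completions payload (gpt-3.5-turbo / gpt-4 style): message list
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": instruction}],
    "max_tokens": 100,
    "temperature": 0,
}

print(json.dumps(completion_payload, indent=2))
print(json.dumps(chat_payload, indent=2))
```

Because the legacy endpoint has no system/user framing, the whole extraction task lives in one prompt, which may be part of why davinci follows terse extraction instructions without needing the long system prompts mentioned above.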