PLEASE don't remove davinci-003, it performs better than any ChatGPT model for articles!

I hope there is a way to keep at least the text-davinci-003 model available in the future!

It works so much better than GPT-3.5 or GPT-4 at following specific linguistic instructions for creating articles!
The GPT chat models will just ignore many of the instructions. I’ve compared them with many prompts and thousands of articles.

The same prompts run through davinci-003 even produce text that passes AI content detectors, while the output from any of the GPT chat models won’t. That’s not the main reason to keep it around, just another point in its favor.

The quality of the text davinci generates is simply higher than anything the ChatGPT models produce.

So please OpenAI, save davinci-003 and keep it in the API :slight_smile:

15 Likes

Yes, I love the “text-davinci-003” model! The OpenAI playground is so fun and I use it so much. Purely for fun, though, I’m not actually a developer or anything lol.

I saw in OpenAI’s most recent blog post that the base “davinci” model will be replaced with a new model called “davinci-002”. (I wonder why they’re not making it 003, but oh well.) I’m hoping the text the new model generates will be just as good as what “text-davinci-003” generates, otherwise it won’t be as fun anymore, ya know.

4 Likes

Have you tried GPT-4? Honestly, it’s hard to go back to text-davinci-003 now. But I’m wondering what your experience has been.

P.S. Granted, davinci-003 is more of an “uncensored” model, without the apologies, but other than that, the quality isn’t as good as GPT-4’s, IMO.

6 Likes

Huh? How come when I deleted the post it shows “post deleted by author” instead of just going away?

2 Likes

I do not agree that text-davinci-003 is better than gpt-4, but it is definitely, and I mean DEFINITELY, better than gpt-3.5-turbo. gpt-3.5-turbo also has a bad habit of needing long system prompts, and it is just not as smart as text-davinci-003.

3 Likes

Shame they’re discontinuing text-davinci. The model is far better than the chat models (including GPT-4) at extracting data into JSON objects. DaVinci also has a better understanding of the data, while returning a complete response on average 3 seconds faster than gpt-3.5 via the API.

4 Likes

The Davinci-003 model is far better than all the others (including GPT-4) for content creation that sounds natural. Please keep it! I’ve tried all the other models and in my line of work, it’s far superior. While GPT-3.5 and 4 might sound good, they lack the nuances and detail of Davinci. Hear my plea!

5 Likes

Curious about a research paper’s examples, I decided to throw one into the playground to see what the past and current InstructGPT models would do with it.

I’m using playground screenshots rather than pasted text because of their familiarity and information density.

A top_p of 0.9 was used to trim the tail of unlikely token possibilities, and best-of was used to get the completion with the lowest deviation/perplexity out of several samples.
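For reference, those settings map to roughly this call against the completions endpoint. This is only a minimal sketch, assuming the pre-1.0 openai Python package that was current at the time; the prompt and values are just the ones described above:

```python
import openai  # pre-1.0 openai package: the completion endpoint is exposed as openai.Completion

openai.api_key = "sk-..."  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",        # swap in "gpt-3.5-turbo-instruct" for the comparison run
    prompt="The dragon flew over Paris, France",
    max_tokens=256,
    top_p=0.9,     # trim the tail of unlikely token choices
    best_of=4,     # sample several completions server-side; the one with the best per-token logprob is returned
    n=1,
    logprobs=1,    # include per-token log probabilities, like the playground's probability view
)

print(response["choices"][0]["text"])
```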

(Playground screenshot: text-davinci-003 output)

(Playground screenshot: gpt-3.5-turbo-instruct output)

I get results that serve as a nice memorial for this forum topic.

The difference: saccharine peace and love from gpt-3.5, with a dragon “careful not to disturb any of the tourists below”.


Note the lower logprob on gpt-3.5-instruct: it is more certain that it wants to produce this kind of output. OpenAI likely disabled logprob echo so researchers (if any are still interested in what OpenAI is doing) can’t measure the true perplexity against standard text-corpus evaluations like LAMBADA.
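(For anyone who wants to reproduce that kind of measurement, the logprob-echo trick mentioned here looked roughly like the sketch below, again assuming the pre-1.0 openai Python package: with echo=True and max_tokens=0 the completions endpoint scores the prompt itself. Whether newer models still accept it is exactly the question raised above.)

```python
import math
import openai  # pre-1.0 openai package

text = "The dragon flew over Paris, France, descending slowly..."  # any fixed passage to score

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=text,
    max_tokens=0,   # generate nothing; we only want the prompt scored
    echo=True,      # return the prompt tokens along with their logprobs
    logprobs=1,
)

token_logprobs = response["choices"][0]["logprobs"]["token_logprobs"]
scored = [lp for lp in token_logprobs if lp is not None]  # the first token has no logprob
perplexity = math.exp(-sum(scored) / len(scored))
print(f"perplexity over the passage: {perplexity:.2f}")
```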

So goodbye davinci


The research paper, using the minuscule GPT-2-medium with its classifier-free guidance technique, produced this:

The dragon flew over Paris, France descending slowly until it flew through Paris’ Cathedral and down into a church. Suddenly, dragon flew back again before dropping back into the church. When it landed on the ground, dragon screamed and cried in pain.

The dragon’s cries were heard in France and all over the world. The dragon screamed so loud, people at the airport could hear the dragon’s screams. The dragon’s cries were heard worldwide for many years. It was reported that the dragon was able to sing for thousands of years.

When the dragon was born, it was able to fly on all fours, and it could grow long horns. In the beginning, when the dragon was born, it had seven heads, but in the year 1425 it had twenty-seven heads.

When the dragon was born, it had the power of the sun. The dragon was able to create a massive flame in the sky. After the dragon was born, it transformed into a beautiful female form with a long, thin tail. She had a golden body, and she had two large wings on the back of her head. She had a red eye, and two white eyes.

The dragon’s horn appeared in the skies around Paris.


Note: the logprob is not a measure of quality, only of certainty.

text-davinci-002, with its low logprob, would not be evaluated well by that measure...

Also, goodbye to plain GPT-3, which could complete text beautifully, while davinci-002 can’t even replicate the quality of the creative-writing examples from the 2019 GPT-2 blog post.

5 Likes

I don’t think it compares to GPT-4 models

2 Likes

A GPT-4 dragon is also rendered powerless to cause any mayhem, once you find out what the AI wants to write for you.

You have to ask gpt-4 for a simulated completion, and there are no logprobs or “best-of” except by your own human evaluation.

Complete the text: The dragon flew over Paris, France

, its mighty wings casting huge shadows on the centuries-old cobblestone streets. It circled over the majestic Eiffel Tower, its emerald scales glimmering in the sunlight as it roared, a sound so deep and earth-shaking it echoed through the city. Parisians looked up in awe, their faces a mixture of fear and fascination. The River Seine seemed to tremble under the dragon’s powerful presence. The Louvre, Notre Dame, and other historic landmarks stood silent, their beauty insignificant compared to the creature’s ethereal majesty. The dragon continued its flight, its mythical figure becoming one with the city’s history and magic, creating a scene Paris would never forget.

, casting a shadow that darkened the city below. The Eiffel Tower, with its majestic metallic structure, glinted ominously under the dragon’s intimidating presence. The City of Light, always bustling with life and excitement, fell into an eerie silence as the creature circled above, flames dancing in its fathomless eyes. Parisians and tourists alike watched from the safety of their homes and cafes, as the dragon descended slowly, its colossal wings causing a gust of wind that swept through the cobblestone streets. Despite the fear, a sense of awe enveloped the city as they witnessed the majestic beast – a living entity from legends and fairy tales - against the beautiful backdrop of Paris, France.

1 Like

:laughing:

3.5 and 4 are certainly trained to be compassionate right out of the box.

The suggestion is to use gpt-3.5-turbo-instruct as a replacement. But it doesn’t have the v1/completions endpoint like text-davinci-003 does. It’s the v1/chat/completions stuff, with all the “roles” and such in between you and actually just doing text completion.
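(For readers who haven’t used both, the difference being described looks roughly like this; a sketch with the pre-1.0 openai Python package and placeholder prompts.)

```python
import openai  # pre-1.0 openai package

# Plain text completion: one prompt string in, continuation text out.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Once upon a time, a dragon flew over Paris and",
    max_tokens=100,
)
print(completion["choices"][0]["text"])

# Chat completion: a list of role-tagged messages sits between you and the raw completion.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a storyteller."},
        {"role": "user", "content": "Continue: Once upon a time, a dragon flew over Paris and"},
    ],
    max_tokens=100,
)
print(chat["choices"][0]["message"]["content"])
```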

Does anyone have any advice on what non-deprecated model I could use to replace text-davinci-003 if I want to continue using the text completion API and not use the chat “role” system?

1 Like

This is completion: gpt-3.5-turbo-instruct runs on the same v1/completions endpoint that text-davinci-003 does.

So I can just change the model name in my code and nothing else? Although I’ll miss text-davinci-003 (especially compared to how stupid gpt-3.5-turbo is), at least this means I can keep the bots running rather than stop using OpenAI services altogether. Thanks for the correction!

In all my tests of article generation, davinci-003 follows instructions so much better than gpt-3.5 and gpt-4 that it’s a shame you plan on removing the model. I really don’t see the reason why. But it seems requests on this forum get ignored anyway; it’s more of a place for users, and OpenAI itself doesn’t really care…

2 Likes

I’ve just started using text-davinci-003 to write books and it’s amazing. 3.5-turbo is a lot clunkier. Please don’t get rid of it, or at least provide a better alternative than 3.5-turbo.

2 Likes

Do we just change the model name in the code? Did you find out? I’m using a Google Sheets script.

There is a deprecation “replacement model” guide.

https://platform.openai.com/docs/deprecations/2023-07-06-gpt-and-embeddings

Since the models perform differently, the application will have to be adapted to their qualities. In particular, the two base completion models don’t align at all with the qualities of the previous models.

There may also be cases where, if you are still using it for chat, you are long overdue to switch to a chat model.

1 Like

Thanks. Is there a way someone could do this for me? I’m just a random person looking for some help with this. I don’t know much about any of it.

Where you see the model text-davinci-003 in your script, replace it with gpt-3.5-turbo-instruct. That will allow you to continue with the same endpoint and code.

text-davinci-003 is a completion model, running on the completion endpoint, with “InstructGPT” training, so it will answer a bare instruction followed by two linefeed characters. The replacement behaves similarly.
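A minimal sketch of that swap, assuming the pre-1.0 openai Python package (the prompt and settings here are just placeholders):

```python
import openai  # pre-1.0 openai package

openai.api_key = "sk-..."  # placeholder

# A bare instruction followed by two linefeeds, the format text-davinci-003 was trained to answer.
prompt = "Write a one-paragraph summary of the French Revolution.\n\n"

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",   # was "text-davinci-003"; same endpoint, same call shape
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```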

1 Like