Let's discuss our experiences transitioning from text-davinci-002 to text-davinci-003

Yesterday OpenAI released text-davinci-003. I am curious to hear about the experiences of everyone who has had a chance to work with the new model.

How have your interactions changed since moving to text-davinci-003?

Does it do everything better, or have some tasks become more difficult?


I haven’t had a chance to work with 003 yet, but I do wish 002 were still available in the playground for troubleshooting my existing production prompts prior to the 003 migration. I also didn’t see any mention in the announcement of continued support for 002, but I am assuming it is still accessible via the API.

002 is still available


Click on “Show more models”.


Thanks for pointing this out. I overlooked that option.


It looks like text-davinci-003 is better at rhyming. Ars Technica put out an article showing some examples.

I’m super excited to put 003 into action. But after just a few tries I’m experiencing some difficulties.

I tested some of my 002 prompts with the new 003 in the playground editor, and it generates different kinds of responses that are considerably worse for my use-case.

It’s probably a matter of adapting how the prompt is built. If there is a prompt engineer here who can point me in the right direction, that would be super appreciated :laughing: In my prompt I’m requesting a chat response for a conversation, together with some background information on the person (like where they are from, what hobbies they have …). In 002 this produces a natural conversation flow that is consistent with the background information. In 003, every response tries to summarize all the background information, like telling you the age and job even when you just asked for the name. This behavior existed in 002 too, but it was rarer; now it happens in seemingly every response. Is it possible that 003 has a tendency to produce longer responses?

// update: I managed to get 003 to produce shorter responses by specifically asking for them in the prompt. That worked surprisingly well. 003 seems to be better at following more complex prompts. Very cool! :partying_face:
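For anyone running into the same verbosity issue, here is a minimal sketch of the kind of explicit brevity instruction that worked for me. All names and the exact wording are made up for illustration, not an official recommendation:

```javascript
// Hypothetical prompt layout -- the background details and instruction
// wording are illustrative assumptions, not the actual production prompt.
const background =
  "Background: Ana, 29, from Lisbon, works as a teacher, loves climbing.";
const userMessage = "What is your name?";

const prompt = [
  background,
  `User: ${userMessage}`,
  // The explicit brevity instruction is what stopped 003 from dumping
  // the whole background into every reply:
  "Answer in one short sentence, mentioning only what was asked for.",
  "Assistant:",
].join("\n");

console.log(prompt);
```

The key difference from my 002 prompt is the dedicated instruction line; 003 seems to follow it much more reliably than 002 did.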


Curious if we know which model fine-tuning uses if we specify davinci? Would it perhaps be 003 already?

For generating fiction and prose, 003 is way better. Previously, the only way to generate fiction was fine-tuning… The original Davinci would just kind of ramble on (the prose is good but with no concept of a story)… 002 would just generate a few sentences.

With 003, I can explicitly give it a paragraph range, and it will generate a story that fits exactly that number of paragraphs and will have a base narrative structure in most cases (not great writing, but will understand that it needs to conclude a short story with some sort of ending).
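To make that concrete, here is a sketch of the kind of paragraph-range instruction I mean. The exact wording and topic are just an example, not my actual prompt:

```javascript
// Illustrative only -- the constraint wording is an assumption.
const minParagraphs = 3;
const maxParagraphs = 5;

const storyPrompt =
  "Write a short story about a lighthouse keeper.\n" +
  `The story must be ${minParagraphs} to ${maxParagraphs} paragraphs long ` +
  "and must end with a proper conclusion.";

console.log(storyPrompt);
```

With 002 this kind of constraint was mostly ignored; with 003 the output usually stays inside the stated range.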


My prompts are used mainly by French speakers, and as much as 003 has shown great promise in English, it is not doing so well in French. The generations start out nicely but quickly get badly scrambled.
In English, 003 also writes much longer texts when asked to write “long”, and the endings get scrambled there too when the output runs long.

Interesting that you mention French specifically. I just ran some tests with multiple languages, and French performed particularly oddly (compared to English, German, Spanish and Swedish, which didn’t seem to have the same issues). But that might also be a problem with the exact prompt?

I generated several poems with davinci-002 and davinci-003. Of course, it is difficult to say which kind of result is better when it comes to writing lyrics…

I found that the poems from davinci-003 rhyme in most lines, so in a classical sense they are far better. The fact that you can choose between the models opens up interesting possibilities for creating this kind of text.

BTW: I took personal descriptions of friends of mine and created poems about them. They were impressed by the poems, and it seems that the “not so perfect” rhymes from davinci-002 had a more emotional touch compared to davinci-003, which is more perfect in its structure. But this might be a very unreliable finding, because I only tested it with 8 different people.


Hello everyone. I am not an expert programmer, but I’ve been working on a project using davinci-003. My issue is that while I am connecting and communicating with the model, I am not getting a completion. I’m not sure what is going on. Any advice or reference?

Here is what I have. Any help would be greatly appreciated.

// Call OpenAI API
const apiKey = document.getElementById("apiKey").value;

// The original snippet had no object before .post (a syntax error, so no
// request was ever sent) and was missing several closing braces. This
// assumes the axios library is loaded on the page.
axios
  .post("https://api.openai.com/v1/engines/text-davinci-003/completions", {
    prompt: prompt,
    temperature: 0.5,
    max_tokens: 3000,
    top_p: 1,
    frequency_penalty: 0.3,
    presence_penalty: 0.2,
    // Careful: "\n" as a stop sequence ends the completion at the first
    // newline, which can yield an empty result if the model starts with one.
    stop: ["\n", "###"],
  }, {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  })
  .then((response) => {
    // Open new window with lesson plan (sketch: original code was cut off here)
    const lessonPlan = response.data.choices[0].text;
    const newWindow = window.open("");
    newWindow.document.write(lessonPlan);
  })
  .catch((error) => console.error(error));