Why does the exact same prompt request to text-davinci-003 alternate between long (~30-word) and short (2-word) responses?

Has this happened to anyone else? I’ve been sending the exact same request (same prompt, same model, same max tokens) to https://api.openai.com/v1/completions (text-davinci-003), but the responses consistently alternate between long (~30 words) and short (2 words). Even after waiting a few minutes between requests, the responses keep alternating. Is there a way to always receive the long response?

Here is the request sent:

{
    "model": "text-davinci-003",
    "max_tokens": 2048,
    "prompt": "Järvenpää Astec hotellin pihaalue on auraamatta. Täällä on varmaan 30 cm lunta tiellä. Tätä ei ole päiviin auratta, kun lunta on paljon tullut.\n\nKirjoita vastaus reklamaatioon, laita loppuun.\n\nTerveisin,\n\nTimo / Astec"
}
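For reference, this is how that payload can be sent with nothing but the Python standard library (a sketch; the endpoint and fields are exactly those from the request above, and the API key is assumed to be in the `OPENAI_API_KEY` environment variable):

```python
import json
import os
import urllib.request

# Placeholder complaint text; substitute the real prompt from the request above.
prompt = "Hotellin pihaalue on auraamatta."

# json.dumps handles the escaping of newlines and quotes in the prompt.
body = json.dumps({
    "model": "text-davinci-003",
    "max_tokens": 2048,
    "prompt": prompt,
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.openai.com/v1/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
    },
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```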

Here is the long response:

{
    "id": "cmpl-6mOC4bl0YHoo5WbhcU0mFdEXFsllG",
    "object": "text_completion",
    "created": 1676990764,
    "model": "text-davinci-003",
    "choices": [
        {
            "text": " Järvenpää\n\nKiitos reklamaatiostasi. Hotellin pihaalue on aurattu säännöllisesti viime aikoina, vaikka lunta on paljon tullut. Yritämme parhaamme mukaan pitää alueen aina siistinä ja jälleen aurata, mikäli lunta on paljon. Kiitos huomioistasi.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 100,
        "completion_tokens": 110,
        "total_tokens": 210
    }
}

Here is the short response:

{
    "id": "cmpl-6mOR7GXhtdGgbzx3AdD9XRavXbn6I",
    "object": "text_completion",
    "created": 1676991697,
    "model": "text-davinci-003",
    "choices": [
        {
            "text": " Hotelli Järvenpää",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 100,
        "completion_tokens": 9,
        "total_tokens": 109
    }
}

Hi.

This smells Finnish, so I’m replying in Finnish.

You should attach some extra guidance to the complaint prompt. For example:

[prompt]
Hotelliimme on lähetetty seuraava reklamaatio.
Reklamaatio: “teksti tänne”
Luo vastaus jossa pahoitellaan tilannetta ja luvataan korjata asia mikäli mahdollista.
[/prompt]

(That is: “The following complaint has been sent to our hotel. Complaint: ‘text goes here’. Write a response that apologizes for the situation and promises to fix the matter if possible.”)

It also helps to include one good example of how to respond as context. For example, the one you already posted here.
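A sketch of filling that template programmatically (the function name is illustrative; the Finnish wording matches the template above):

```python
import json

def build_prompt(complaint: str) -> str:
    # Quote the raw complaint and state the task explicitly,
    # following the [prompt] template above.
    return (
        "Hotelliimme on lähetetty seuraava reklamaatio.\n"
        f'Reklamaatio: "{complaint}"\n'
        "Luo vastaus jossa pahoitellaan tilannetta ja "
        "luvataan korjata asia mikäli mahdollista."
    )

# ensure_ascii=False keeps ä/ö readable instead of \uXXXX escapes.
body = json.dumps({
    "model": "text-davinci-003",
    "max_tokens": 2048,
    "prompt": build_prompt("Hotellin pihaalue on auraamatta."),
}, ensure_ascii=False)
```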


This may sound weird, but do you URL-encode your request? I experienced the exact same pattern as you; it was solved by removing the `URLEncode(prompt)` call I was using in .NET and manually handling the characters that could break the request.
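To illustrate the difference (a Python sketch of the same pitfall, not the original .NET code): URL-encoding turns newlines and quotes into %0A and %22, which then reach the model as literal text, whereas a JSON serializer escapes them the way the API expects:

```python
import json
from urllib.parse import quote

prompt = 'Rivi yksi.\n"Lainaus."'

# URL-encoded: the model would see %0A and %22 as literal characters.
url_encoded = quote(prompt)

# JSON-encoded: \n and \" are proper escapes that decode back to the original.
json_encoded = json.dumps(prompt)

print(url_encoded)   # Rivi%20yksi.%0A%22Lainaus.%22
print(json_encoded)  # "Rivi yksi.\n\"Lainaus.\""
```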


No, there’s no encoding or decoding whatsoever. That does sound weird. But I’m finding that the API can be quite nitpicky, and getting the right responses really does require some fiddling. Before this, I was having issues with line breaks returning blank responses. ¯\_(ツ)_/¯

But this new prompt, as suggested by krisu, seems to work: basically, state the request clearly and put the complaint text in quotes.

{
    "model": "text-davinci-003",
    "max_tokens": 2048,
    "prompt": "Hotelliimme on lähetetty seuraava reklamaatio. \"Järvenpää Feronom hotellin pihaalue on auraamatta. Täällä on varmaan 30 cm lunta tiellä. Tätä ei ole päiviin auratta, kun lunta on paljon tullut.\"\n\nLuo vastaus jossa pahoitellaan tilannetta ja luvataan korjata asia mikäli mahdollista."
}

Ok, that makes sense. I also experienced it in combination with quite a long prompt with extensive instructions. I hit the same \n issue returning empty responses, which is why I resorted to URL-encoding, and ended up doing some manual replaces instead.

My prompts are often in Danish, and I noticed that yours are in Finnish. Have you experienced a lack of quality when requesting longer responses?

I use embeddings to match relevant pieces of text, which then become part of the prompt. If I ask for a response that is longer than the embedded text, it starts writing in what looks like a combination of Swedish and Norwegian at the end of my response :sweat_smile:


I learned when trying prompts like “Translate from fr to en: \nblah\nmore” that, in JSON, the \n character needs a space before and after for the returned text to be a complete translation. That may not be your problem, but just in case.
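I can’t say whether the spaces are strictly required, but one thing worth checking is what the \n actually looks like on the wire: a properly serialized JSON body carries it as the two-character escape \n, never a raw line break (a quick Python sketch):

```python
import json

prompt = "Translate from fr to en: \nblah\nmore"
body = json.dumps({"prompt": prompt})

# The serialized body contains the escape sequence \n, not a raw line break.
print(body)          # {"prompt": "Translate from fr to en: \nblah\nmore"}
print("\n" in body)  # False: no raw newline in the JSON text
```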
