Has this happened to anyone else? I've been sending the exact same request (same prompt, same model, same max_tokens) to https://api.openai.com/v1/completions (text-davinci-003), but the responses consistently alternate between a long (~30-word) completion and a short (2-word) one. Even after waiting a few minutes between requests, the responses keep alternating. Is there a way to always receive the long response?
Here is the request I sent (the Finnish prompt asks the model to write a reply to a complaint that the hotel's yard hasn't been plowed despite about 30 cm of snow):
{
  "model": "text-davinci-003",
  "max_tokens": 2048,
  "prompt": "Järvenpää Astec hotellin pihaalue on auraamatta. Täällä on varmaan 30 cm lunta tiellä. Tätä ei ole päiviin auratta, kun lunta on paljon tullut.\n\nKirjoita vastaus reklamaatioon, laita loppuun.\n\nTerveisin,\n\nTimo / Astec"
}
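One thing worth noting: the request above sets neither `temperature` nor `top_p`, so the API samples with the default `temperature` of 1, which by itself explains run-to-run variation. A minimal sketch of the same request with `temperature` pinned to 0 (the environment-variable key handling and `urllib` plumbing here are my assumptions, not part of the original request; even at `temperature: 0` the output is not guaranteed to be byte-identical, but it varies far less):

```python
import json
import os
import urllib.request

# Same request as above, but with temperature pinned to 0 so the model
# always favors the most likely token at each step (near-deterministic).
payload = {
    "model": "text-davinci-003",
    "max_tokens": 2048,
    "temperature": 0,  # default is 1; lowering it reduces run-to-run variation
    "prompt": (
        "Järvenpää Astec hotellin pihaalue on auraamatta. Täällä on varmaan "
        "30 cm lunta tiellä. Tätä ei ole päiviin auratta, kun lunta on paljon "
        "tullut.\n\nKirjoita vastaus reklamaatioon, laita loppuun.\n\n"
        "Terveisin,\n\nTimo / Astec"
    ),
}
body = json.dumps(payload).encode("utf-8")

# Only fire the request if an API key is actually configured
# (OPENAI_API_KEY is an assumed environment-variable name).
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```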
Here is the long response (a proper Finnish reply to the complaint, signed off as requested):
{
  "id": "cmpl-6mOC4bl0YHoo5WbhcU0mFdEXFsllG",
  "object": "text_completion",
  "created": 1676990764,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": " Järvenpää\n\nKiitos reklamaatiostasi. Hotellin pihaalue on aurattu säännöllisesti viime aikoina, vaikka lunta on paljon tullut. Yritämme parhaamme mukaan pitää alueen aina siistinä ja jälleen aurata, mikäli lunta on paljon. Kiitos huomioistasi.",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 100,
    "completion_tokens": 110,
    "total_tokens": 210
  }
}
Here is the short response (just a hotel name, no actual reply):
{
  "id": "cmpl-6mOR7GXhtdGgbzx3AdD9XRavXbn6I",
  "object": "text_completion",
  "created": 1676991697,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": " Hotelli Järvenpää",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 100,
    "completion_tokens": 109,
    "total_tokens": 109
  }
}
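Since the short completion is easy to recognize from `usage.completion_tokens` (9 vs. 110 above), one workaround, if the sampling settings stay as they are, is to retry when the completion comes back suspiciously short. A sketch, where `send_request` is a hypothetical zero-argument callable you supply that returns the parsed /v1/completions JSON, and the token threshold is an assumption:

```python
MIN_COMPLETION_TOKENS = 30  # assumed threshold: the short reply above used only 9


def get_long_completion(send_request, max_attempts=3):
    """Call send_request until the completion looks long enough.

    send_request: zero-argument function returning the parsed
    /v1/completions JSON response. Gives up after max_attempts and
    returns the last response either way.
    """
    response = None
    for _ in range(max_attempts):
        response = send_request()
        if response["usage"]["completion_tokens"] >= MIN_COMPLETION_TOKENS:
            break
    return response
```

This only papers over the randomness (and burns extra tokens per retry); lowering `temperature` addresses the cause directly.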