Prevent max_tokens from cutting off text

Is there any way to prevent the result from being cut off? I need to use max_tokens, but I can't find a way to avoid this issue.
Here is my body:
{
  "model": "text-davinci-002",
  "prompt": "Generate business name description\nBusiness type: bakery\nSpecialization: specialise in ice cream cakes \nOwner:Tom\n ",
  "temperature": 1,
  "n": 3,
  "max_tokens": 20,
  "user": ""
}
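
For context, this is the equivalent call with the openai Python library (a sketch assuming the legacy pre-1.0 client, which matches the Completions endpoint used above; the API key is a placeholder):

import openai

openai.api_key = "sk-..."  # placeholder; set your own key

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=(
        "Generate business name description\n"
        "Business type: bakery\n"
        "Specialization: specialise in ice cream cakes \n"
        "Owner:Tom\n "
    ),
    temperature=1,
    n=3,
    max_tokens=20,
)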

Here is one of the results:
{
  "text": "\n\nTom's Bakery is a specialised bakery that makes ice cream cakes. Owner Tom is",
  "index": 2,
  "logprobs": null,
  "finish_reason": "length"
}
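
The "finish_reason": "length" means the completion stopped because it hit max_tokens rather than ending naturally (which would be "stop"). One workaround I'm considering is to retry with a larger budget whenever that happens (a rough sketch, again with the legacy Python client and the api_key set as above; the doubling and the 160-token cap are arbitrary), but I'd prefer something cleaner:

import openai

def complete_untruncated(prompt, max_tokens=20, cap=160):
    # Double the token budget until the model finishes on its own
    # or the budget reaches the cap.
    while True:
        response = openai.Completion.create(
            model="text-davinci-002",
            prompt=prompt,
            temperature=1,
            max_tokens=max_tokens,
        )
        choice = response["choices"][0]
        # "stop" = the model ended by itself; "length" = cut off by max_tokens.
        if choice["finish_reason"] == "stop" or max_tokens >= cap:
            return choice["text"]
        max_tokens *= 2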

Has anyone managed to fix this issue? A working example would be awesome.
Thanks in advance!
