Making Completion Responses Longer

Has anyone had success making completion responses longer? Increasing max_tokens doesn't do the trick for me, since it is only an upper bound that the model is not guaranteed to reach, and as far as I can tell there is no way to explicitly tell the model to produce output of a certain (minimum) length. I have tried mentioning the desired word count in the prompt (I am using the "davinci-instruct-beta" engine), but that didn't help either. I have also tried a two-stage approach: I make a second call whose prompt consists of the first call's prompt concatenated with its response, with otherwise the same parameters, but this just produces an empty string. Any ideas or tips are very welcome!
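For reference, a minimal sketch of the two-stage approach described above. The function name is hypothetical, and the trailing-whitespace handling is an assumption: one plausible reason for an empty continuation is that the model treats the appended text as already complete, so trimming whitespace before resubmitting may help.

```python
def continuation_prompt(original_prompt: str, first_response: str) -> str:
    """Build the second-stage prompt by appending the first response to the
    original prompt, so the model continues where it left off.

    Trailing whitespace is stripped on the assumption (untested) that a
    dangling space or newline can make the model consider the text finished
    and return an empty continuation.
    """
    return original_prompt.rstrip() + "\n" + first_response.rstrip()


# The result would then be sent as the prompt of a second completion call
# with the same engine and parameters as the first.
```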

2 Likes

The key is to use qualitative language. Including an instruction like "provide a very detailed explanation" will increase the verbosity of the output.
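As a sketch of this tip, a hypothetical helper that prepends a verbosity cue to an existing prompt (the exact wording of the cue is just an example, not a tested recipe):

```python
def add_verbosity_cue(prompt: str) -> str:
    """Prepend a qualitative instruction asking for a very detailed answer.

    The cue text is illustrative; any phrasing that asks for detail
    ("in depth", "thoroughly", "step by step") could be substituted.
    """
    return "Provide a very detailed explanation.\n\n" + prompt.strip()


# Usage: pass the augmented prompt to the completion call instead of the
# original, e.g. add_verbosity_cue("What causes seasons on Earth?")
```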

4 Likes

Yep, the AI is very understanding, just ask nicely hahaha :DD

1 Like

Thanks a lot. Looks like this is the way to go. I need to improve my prompt engineering then.

1 Like