How to get instruct series to limit the output length?

Using Davinci instruct.

It doesn’t seem to obey instructions very well, like “The paragraph should be 2 sentences long”…


The key is to use clever stop sequences and prompt structures. Often I use <<END>> and \n\n in combination. Remember, GPT-3 is a generator: it wants to generate text! So we have to use other methods to keep it on a tight leash.
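To make the idea concrete, here is a minimal sketch of what those stop sequences do. With the legacy Completions API you would pass them as `stop=["<<END>>", "\n\n"]`; the helper below (the function name is mine) just mimics the same cut client-side:

```python
def truncate_at_stop(text, stops=("<<END>>", "\n\n")):
    """Cut generated text at the first occurrence of any stop string,
    mimicking what the API's `stop` parameter does server-side."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

In the prompt itself, you end each example answer with <<END>>, so the model learns to emit it right after a short answer, and the stop sequence then terminates generation at that point.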


Hi @daveshapautomator, can you please give a quick example? That would be great :slight_smile:

Let’s say I want it to generate only 50 characters of text.


1 Like

If you just want to generate 50 characters, then you should use the max_tokens limit.

1 Like

@jefftay Assuming you’re using stop=['\n'] for the instruct series, examples still help constrain behaviour further, so perhaps add one or two to steer the model better. It may also be the case that, given GPT-3’s pre-training data, instances of ‘paragraphs’ are correlated with more than two sentences, so experimenting with phrasing that has more precise semantics (i.e. “the answer should be brief, at most two sentences”) could improve results.
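A sketch of that few-shot setup (the builder function and placeholder texts are illustrative, not from this thread): each demonstration answers in one newline-terminated line, so running the completion with stop=['\n'] ends it at the same length as the examples.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt whose demonstrations are all short,
    so the model imitates their length. Completions run with
    stop=["\n"] then end at the first newline, like the examples do."""
    lines = ["The answer should be brief, at most two sentences."]
    for text, answer in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Answer: {answer}")
    lines.append(f"Text: {query}")
    lines.append("Answer:")
    return "\n".join(lines)
```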

You could also decompose the task into two steps: the first step instructs the model to generate two bullet points, each starting with a “-” token, and the next step merges and rephrases them. Lastly, brute-forcing with an accept_condition and max_attempts, regenerating until the output is in the correct format, would give stronger guarantees. It all depends on the latency requirements and compute budget for the task.
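The brute-force option can be sketched as a small retry loop. Here `generate` stands in for whatever completion call you use, and `accept_condition`/`max_attempts` follow the naming above:

```python
def generate_until_valid(generate, accept_condition, max_attempts=5):
    """Call `generate` (any function returning a completion string) until
    `accept_condition(output)` is True, giving up after max_attempts."""
    for _ in range(max_attempts):
        output = generate()
        if accept_condition(output):
            return output
    return None  # budget exhausted; caller falls back or surfaces an error
```

For the two-sentence case, accept_condition could be as simple as `lambda t: t.count('.') <= 2`, or a character-count check.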

Fine-tuned models will give you more mileage in terms of constraining model output overall, though.

1 Like

Using max_tokens would just cut off the text. I would like it to complete the sentence within that limit.


1 Like

Use descriptive language to describe the output you want, such as “write a concise reply”.

Yeah, I have been giving clear instructions, but it just ignores them. I thought there might be another trick that I am not aware of.

The following is my prompt:
For the following text, write a concise headline of no more than 30 characters:
Create original content in a matter of seconds. Unlimited copy variations, so it’s easy for you to find the most suitable one for your needs. Create fresh blogs and social media content on-demand from RyterAI. Try it now.
Headline: Create fresh blogs and content from the convenience of your

Other details:

"temperature": 0.87,
"max_tokens": 10,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
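One likely culprit in the settings above: max_tokens of 10 barely covers a 30-character headline, and a completion that hits the token ceiling gets truncated mid-sentence. A rough budgeting rule of thumb (the ~4 characters per English token figure is an average, not exact, and the helper is mine):

```python
import math

def max_tokens_for_chars(char_limit, chars_per_token=4, slack=1.5):
    """Estimate a max_tokens budget for a character limit. English averages
    roughly 4 characters per token; the slack factor leaves room for the
    model to finish its sentence instead of being cut off mid-word."""
    return math.ceil(char_limit / chars_per_token * slack)
```

For a 30-character headline this suggests a budget of around 12 tokens rather than 10, and you would still trim the result afterwards rather than rely on the limit alone.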

It doesn’t really understand counting, so I would drop the use of specific numbers. The reason is that it generates in one go, whereas humans work on things in multiple passes to ensure adherence to rules. For instance, you also won’t get iambic pentameter or a haiku out of GPT-3, because that requires more planning and structure.


Yes, I have realised that. It makes more sense now :slight_smile:

@daveshapautomator, actually the Instruct models follow the directions you give them fairly precisely. I achieved fairly good results trying this technique out:

Asking Davinci Instruct to provide an exactly 50-word summary of how macaroni was invented:

Total word count: 41
Max tokens allocated to it: 539 (Well over the 50 word limit)

Here’s another:

Provide a 70 word summary of how the French war happened:

Total word count: 65
Max tokens allocated to it: 703 (Well over the 70 word limit)

Important Note Regarding Using Numbers with GPT-3:

Okay, so I just realized something important to note here. While these models are good at following instructions, it seems that with higher word-count limits (100+ words) they stop following instructions, and I think I know why.

I was able to achieve remarkable results with lower numbers but not with higher ones. This is the same issue I encountered in my first post on whether GPT-3 is capable of being good at mathematics, where I learned that it performs well with algebra when using low numbers such as 33, or even 77 at times, but when tasked with numbers exceeding 100 it actually started to perform worse. I believe it has something to do with how the models are trained on natural language, which differs from the numerical form of numbers.

I discovered that the word form of numbers proved much more effective when communicating with these models, which I explain in great detail in the post mentioned above.

Failure point when tasked with providing exactly 177 words describing how advertisers lure people into buying their products:

Total word count: 89 (Well under the 177 word count)
Max tokens allocated to it: 1521

This is when I learned that converting numbers from numerical form into word form proved far more effective, as seen in this next example:

Total word count: 194
Max tokens allocated to it: 1521 (Well over the 200 word limit)

If you’re seeking higher word-count limits, try using the word form of the number, and, just to help the Instruct models out, you can certainly emphasize that you want a specific word count in your completions!
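A trivial way to apply that tip programmatically is a lookup for the counts you actually use (the table and helper here are mine, and only cover a handful of values):

```python
# Word forms for the handful of word-count limits used in this thread.
WORD_FORM = {
    30: "thirty", 50: "fifty", 70: "seventy",
    100: "one hundred", 200: "two hundred",
}

def word_count_instruction(n):
    """Phrase a word-count limit using the word form of the number,
    falling back to digits when the count isn't in the table."""
    return f"Provide a {WORD_FORM.get(n, str(n))} word summary of"
```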

Here’s a similar prompt where instead of using 200 words in word form, it’s in numerical form:

Total word count: 474 (Had to stop the model from generating more words)
Max tokens allocated to it: 1522

I also had a tough time getting Davinci Instruct to follow instructions when tasked with generating paragraphs with a certain number of sentences. Usually I’m pretty good at prompt engineering to get the desired output quickly. I’m seeing that the Instruct models are quite sensitive to the starting text you use alongside your prompt. After several tries, I got Davinci Instruct to almost achieve a 5-sentence completion by using ‘Short Report:’ as the starting text. However, when I instructed it to provide no more than 5 sentences, I got 6. It does seem that the prompt wording as well as the starting text carry significantly more weight when generating completions.

Hope this helps @jefftay and everyone else!!



Hi @DutytoDevelop, you are spot on that using the word form instead of numerals, and having a short starting text like ‘Short Report:’, helps a lot. Limiting the output to a few words works 90% of the time; I couldn’t get a 100% result yet. However, when it comes to limiting characters rather than words (my original problem), it doesn’t obey at all.

Example: Temperature: 0

Provide exactly 30 characters that summarize how macaroni was invented:

Macaroni was invented in Italy in the 18th century. (51 characters)

Provide exactly 50 characters that summarize how macaroni was invented:

Macaroni was invented in Italy in the 18th century. (51 characters)

Provide exactly thirty characters that summarize how macaroni was invented:

Macaroni was invented in Italy in the 18th century. (51 characters)

Changed temperature to 0.8

Provide exactly thirty characters that summarize how macaroni was invented:

The chef of the Presidential Palace in France during the late 18th century, Nicolas Appert, was the first to create a way to preserve food so it could be consumed at a later date. To do this, he would seal food inside glass jars. At the time, the only food that could be preserved in this way was vegetables.

Later on, an Italian chef serving at French court, Alexandre Balthazar Laurent Grimod, had the idea of using macaroni

It is hit and miss. I think it somehow obeys when the output is a bit longer.



Okay, now get iambic pentameter out of it :stuck_out_tongue_winking_eye:


Hi @m-a.schenk, that macaroni invention probably wasn’t a good example :). In that case, the “how” matters more than the 50 characters. Back to the original problem: it is a genuine need to ask GPT to return output within a character limit. For example, a Google headline allows only 50 characters. Say I have 6 lines of text; I just wanted to see if Davinci-instruct could come up with something, but I then realised it doesn’t obey character instructions regardless of how I phrase them (i.e. precisely 50 characters, limited to 50 characters, summarize to 50 characters only). I am now providing examples to Davinci so it can generate similar output, and I also show users the character count of the output so they can adjust the length. So far so good.

1 Like

If you want to get long messages of a specific word length, then I think you might want to try a bit of recursion.
Take your initial result, measure the number of words, then feed back the difference with a prompt like:

Rewrite this article so that it is five words longer

And then maybe do that procedure multiple times.
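That recursion can be sketched as a loop; here `rewrite` stands in for a completion call wrapping a prompt like the one above, and the function name and tolerance are my own choices:

```python
def adjust_length(text, target_words, rewrite, max_rounds=3, tolerance=5):
    """Repeatedly ask the model to lengthen or shorten `text` until its
    word count is within `tolerance` of target_words. `rewrite(text, delta)`
    wraps a prompt such as "Rewrite this article so that it is N words
    longer" (or shorter, when delta is negative)."""
    for _ in range(max_rounds):
        delta = target_words - len(text.split())
        if abs(delta) <= tolerance:
            break
        text = rewrite(text, delta)
    return text
```

Each round costs another completion, so max_rounds is effectively a latency/compute budget for the task.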

I need a maximum number of characters (I know we only have tokens, so I’ve tried to do some computation). Anyway, the problem is that max_tokens cuts off the text.
Here is my body request:
"model": "text-davinci-002",
"prompt": "Generate business name description \nBusiness type: bakery\nSpecialization: specialise in ice cream cakes \nOwner:Tom\n",
"temperature": 1,
"n": 3,
"max_tokens": 20,
"user": ""

Here is one of results:
"text": "\nTom's Ice Cream Cakes is a bakery that specializes in ice cream cakes. Tom, the",
"index": 2,
"logprobs": null,
"finish_reason": "length"
Did anyone manage to solve this problem (not cutting off the result), and can you share a working example?
Thanks in advance
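One common workaround for exactly this truncation: request a somewhat larger max_tokens than you need, then cut the completion back to its last complete sentence whenever finish_reason is "length". The helper below is a sketch; the regex only recognizes sentences ending in ., !, or ? (optionally followed by closing quotes or brackets).

```python
import re

# A sentence ends with ., !, or ?, possibly followed by closing
# quotes/brackets, and then whitespace or end-of-string.
SENTENCE_END = re.compile(r'[.!?]["\')\]]*(?=\s|$)')

def trim_to_last_sentence(text):
    """Drop a trailing partial sentence from a completion that was cut off
    by max_tokens (finish_reason == "length")."""
    matches = list(SENTENCE_END.finditer(text))
    if not matches:
        return text.strip()  # no complete sentence found; keep as-is
    return text[: matches[-1].end()].strip()
```

Applied to the result above, this would drop the dangling "Tom, the" and keep the complete first sentence.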

You can ask it to rewrite if it exceeds the limit. This replicates how both AI and humans think, and it works faster than prompting it again afterwards to shorten its previous response (which is what I did before this).

write a 250 characters text about Valorant spectrum phantom Keychain, rewrite it if it exceeds 250 characters


The Valorant Spectrum Phantom Keychain is a must-have accessory for fans of the popular tactical shooter game. This high-quality keychain features the iconic Spectrum Phantom weapon design, with intricate details and a sturdy construction. It’s a stylish addition to any fan’s everyday carry and a great way to show off your love for Valorant.

(251 characters, here’s a rewrite)
The Valorant Spectrum Phantom Keychain is a high-quality accessory for fans of the tactical shooter game. It features intricate details of the iconic weapon design, adding style to everyday carry. Perfect for showing love for Valorant.