Is it possible to stop a completion at the Nth occurrence of the stop sequence?

I have a customized model that was trained on a dataset where all the prompts are empty strings. It works pretty well; my only issue is that when I set the stop sequence the completion ends up very short, but without the stop sequence it turns into a very long paragraph. I’ve noticed that these long completions are only relevant through the first few occurrences of the stop sequence, and after that they go off topic.

Is it possible to stop the completion after, say, the third instance of the stop sequence? I’ve seen methods where you create a loop and feed the completion back in as the new prompt, but that approach isn’t working for me; it just causes the first completion to be repeated.
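
Roughly, the loop I’m describing looks like the sketch below. It assumes the legacy Completions endpoint of the openai Python library; the fine-tune name and settings are placeholders, not my real values:

  import openai

  openai.api_key = "YOUR_API_KEY"

  prompt = ""  # my fine-tune was trained with empty prompts
  pieces = []

  for _ in range(3):  # hoping to collect three "sentences"
      response = openai.Completion.create(
          model="davinci:ft-personal-2023-01-01",  # placeholder fine-tune name
          prompt=prompt,
          max_tokens=100,
          temperature=0.7,
          stop=["###"],
      )
      text = response["choices"][0]["text"]
      pieces.append(text)
      # feed the completion back in as the next prompt...
      prompt += text
      # ...but in practice each pass just regenerates the first completion

  print("".join(pieces))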

What are your settings? What are you using for the stop sequence? Was the stop sequence included in the fine-tuning dataset?

Have you tried not using a stop sequence but lowering max_tokens?

PS - welcome to the community. Hope you stick around!

By settings do you mean temperature, etc.? I’ve tried various combinations of temperature, frequency penalty, and presence penalty, but I think the root of the problem is the dataset I’m using. That isn’t really fixable, since I generate it automatically (which is also why all my prompts are empty), so I’m looking for a workaround.

All my completions end with “.###”, and I set the stop sequence to “###” since I want to keep the period at the end.

My understanding of the max_tokens setting is that it doesn’t actually influence the completion; it’s just a way to cut the completion off to limit credit usage. I think this would be my last-resort option if there isn’t a better way to shorten the completion more naturally.

Can you ask the question so that it numbers each answer? (maybe by giving an example or two with a leading “1.” and “2.” in the prompt) Then you can use a stop sequence of “3.” or “###\n3.”

It depends on the structure of your prompt.

You may be able to do the same thing by modifying the prompt so the number is embedded in the ### part, e.g. ##1##, ##2##, ##3##, and then you can stop on ##3##.
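
For instance, if the fine-tuning completions ended each sentence with a numbered marker instead of a bare ###, a single stop sequence could end the completion right after the third sentence. A rough sketch (the marker format and fine-tune name are just illustrations, using the legacy Completions endpoint of the openai Python library):

  import openai

  # hypothetical training completion format, one numbered marker per sentence:
  #   "First sentence.##1## Second sentence.##2## Third sentence.##3## ..."

  response = openai.Completion.create(
      model="davinci:ft-personal-2023-01-01",  # placeholder fine-tune name
      prompt="",
      max_tokens=300,
      stop=["##3##"],  # generation halts when the third marker appears
  )

  # the returned text holds three sentences, with ##1## and ##2## left in between;
  # strip those markers afterwards
  text = response["choices"][0]["text"]
  print(text.replace("##1##", "").replace("##2##", ""))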


Maybe “Number each sentence in your reply.” might work, or “Start each sentence on a new line.” Just random ideas. Like I said, it depends on the wording of your question and whether it will work in your specific circumstance.


To go along with this, you can reverse the ordering of the numbers to get it to stop where you want…

i.e.

  3. Sentence
  2. Sentence

OUTPUT

  3. something
  2. something
  1. something

Reversing it like that tends to help it stop better (at least on Davinci)… I think text-davinci-003 is better at knowing how to handle a list of 100 or 50 topics.



The problem here is that since my dataset is prompt-less, completions will continue the prompt rather than answer it, which I don’t mind for my usage for the most part. It just means that instead of giving a prompt such as “Tell me a story about…”, you would instead say “This is a story about…”.

I have tried formatting my dataset generation to give each example some sort of prompt, but I’ve found the generated completions aren’t as good that way.

Given there are no other suggestions, you may have little choice but to do some post-processing of the responses (a rough sketch follows this list):

  • Set max_tokens high enough to allow at least two full sentences (maybe allow for three).
  • But set max_tokens low enough to conserve tokens (save cost).
  • Leave the ###s in the responses.
  • Then, once the completion comes back to you, throw away everything after the second ###.
  • Finally, remove the extra ### left at the end of the first sentence.
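
A minimal sketch of that post-processing, assuming the legacy Completions endpoint of the openai Python library and a placeholder fine-tune name:

  import openai

  KEEP = 2  # number of "###"-delimited sentences to keep (raise to 3 if you want three)

  response = openai.Completion.create(
      model="davinci:ft-personal-2023-01-01",  # placeholder fine-tune name
      prompt="",
      max_tokens=200,  # room for a few full sentences, low enough to cap cost
      # no stop sequence, so the ### markers come back in the text
  )
  raw = response["choices"][0]["text"]

  # throw away everything from the second ### onward; the split/join also
  # drops the marker left at the end of the first sentence
  cleaned = "".join(raw.split("###")[:KEEP]).strip()
  print(cleaned)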

I agree this is likely the best/only solution. I hope in the future they add a way to continue the completion beyond the first occurrence of the stop sequence.
