Stop sequences being ignored

I have fine-tuned a babbage and a curie model with data formatted according to the docs and then cleaned with the provided tool. The tool recommended adding a \n stop sequence to the end of the training completions, which I accepted, and it worked fine. The use case is converting text into commands, for which I have an interpreter. An example line of training data is:

{"prompt": "What's the nearest restaurant? ->", "completion": " findClosest(restaurant)\n"}

The fine tuning all seemed to work fine, but when I test it with the command:

openai api completions.create -m babbage:[my_fine_tuned_model_id] -p "What is the nearest restaurant ->" --stop "\n"

I get output like:

What is the nearest restaurant -> findClosest(restaurant)

}, getHours(taco)


In other words, it repeats the prompt, the arrow, then gives the correct output (almost always), then ignores the stop sequence \n, and then continues with a bunch of random junk.

What am I missing? Am I using the stop sequence wrong? Do I need to escape it? Did I format it wrong or something else?

Separating the stop sequence from the rest of the completion with a space ' ' will likely help.

Alternatively, you can try the max_tokens parameter.
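To illustrate the spacing suggestion, the training line from the original post would become something like this (just an illustration, not a guarantee it fixes the issue):

```json
{"prompt": "What's the nearest restaurant? ->", "completion": " findClosest(restaurant) \n"}
```

with a space before the \n stop sequence so it isn't fused to the last token of the command.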

Thanks for the ideas! I have tried max_tokens, but it doesn't resolve the issue. As for the space ' ', I will try adding that to the training data and retrain.

But my understanding was that when the output includes the stop sequence, it would stop, regardless of what is in the training data. And since there are line breaks in the output, I would have assumed that would stop it. Does anybody know how exactly the stop sequence is triggered?
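My mental model of it (a sketch of how I understand stop handling to work, not OpenAI's actual server-side implementation) is that the completion is truncated at the first occurrence of the stop string, which is itself dropped. One thing worth checking: a "\n" typed in a shell may reach the API as the two literal characters backslash + n rather than a newline, in which case it would never match real line breaks in the output.

```python
def apply_stop(generated: str, stop: str) -> str:
    """Simulate stop-sequence handling: cut the completion at the first
    occurrence of the stop string and discard the stop string itself."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]

# A real newline stop truncates the junk after the command:
print(apply_stop(" findClosest(restaurant)\n}, getHours(taco)", "\n"))
# → " findClosest(restaurant)"

# A shell-mangled stop of literal backslash + n never matches a real
# newline, so nothing is truncated:
print(apply_stop(" findClosest(restaurant)\n}, getHours(taco)", "\\n"))
```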

Might try with a more “unique” stop sequence like \n###\n or similar…

What are your settings for the completion? (Temperature, freq_pen, etc…)


SOLVED: Thank you for the response! I changed the stop sequence to ###, trained for a few more epochs based on another thread on here, and it seems to be working.


Good job :slight_smile:

Thanks for the upgrade.

FWIW, as outlined in the “other thread” you may have seen or mentioned, I used ++++ for the separator and #### for the stop, which works well for me.
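In that style, a training line would look something like this (an illustrative example in my format, not a line from actual training data):

```json
{"prompt": "What's the nearest restaurant? ++++", "completion": " findClosest(restaurant) ####"}
```

with ++++ marking the end of the prompt and #### serving as the stop sequence passed at inference time.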

Plus do this:

See Also:

Fine-Tuning In a Nutshell with a Single Line JSONL File and n_epochs