Negative prompts for text generation

Hi, I want to use gpt-3.5-turbo for text generation, and I have a list of words that I don’t want it to use. I have passed this list of words to exclude into the prompt in many different ways, but the output always contains some of them. I have also tried different temperatures, but it seems the model just cannot avoid using those common words/adjectives. Is there a negative prompt for text generation, or any workaround? Any help would be much appreciated.
Thank you in advance!!

Hi @elm, thank you so much for the tip. However I just got back this error:

InvalidRequestError: Invalid key in 'logit_bias': stunning. You should only be submitting non-negative integers.

Seems like it cannot be set to negative values?
Thank you in advance :)

Welcome to the OpenAI community @miryam.ychen

Negative prompting, i.e., telling the model not to do something, isn’t inherently supported, and it’s usually not recommended.

If you want to solve this with the prompt, you’re going to have to redesign it with clear, concise instructions about what to do, instead of what not to do.

But if you want to ban certain words from completions, then @anon22939549 has pointed you in the right direction with logit_bias:

Modify the likelihood of specified tokens appearing in the completion.

Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

According to the docs, you’ll have to pass a JSON object mapping each token you don’t want to -100.

You can obtain the token IDs using the tiktoken tokenizer.


Hi Jake,

Thank you for pointing me in the right direction! While the blog you shared did not work for me, it was very insightful. I had to use tiktoken.get_encoding("cl100k_base") with the ChatCompletion gpt-3.5-turbo model, plus add a space before a given word to get the right token ID, and then it worked :)

I’m leaving this link for the forum with the details in case anyone is interested: [Reproducible] gpt-3.5-turbo logit_bias -100 not functioning

Thank you again for your time and attention :)
