How do you make the GPT-3.5 API use specific keywords?

I have a small Python webapp that uses GPT 3.5 Turbo to generate articles for a blog. Everything is working fine, except that no matter what I do, it will not use the list of keywords I provide while writing the article.

Here is the relevant bit of code.

class ArticleGenerator:
    def __init__(self, keywords):
        self.keywords = keywords
        self.messages = [
            {"role": "user", "content": f'''You must use every word from the following list of words at least
            once in the article: {keywords}. Do you understand?'''},
            {"role": "assistant", "content": '''I understand. I will use every word from the list of words
            you provided at least once in the article.'''},
        ]

I’m using the same method to generate every other part of the article; all of those instructions are contained in self.messages, and they’re working great. This is the only instruction that seems to have no impact at all.

I’ve tried about 10 different ways and locations to include this keyword instruction, but it doesn’t work. Any tips would be much appreciated, and I can post more code if the problem isn’t clear from above. Thanks!


You can, in a single meta prompt:

  1. Write and print an article based on a topic, in a style.
  2. Present the keywords: “these are the keywords; we must find a way to shoehorn each one into the article by rewriting: [list, list]”.
  3. Rewrite and print the article with the keywords placed organically within the text or within expanded or new sentences.

The AI doesn’t have an internal memory or pre-thought. Here we let it see what it wrote originally and have it rewrite.
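A minimal sketch of that write-then-rewrite meta prompt as a single message string; the topic, style, and keyword list here are all placeholders:

```python
topic = "the GPU market"          # placeholder topic
style = "an informal blog post"   # placeholder style
keywords = ["GPT", "Nvidia", "generative AI"]

# One meta prompt: write the article, then rewrite it with the keywords worked in.
meta_prompt = (
    f"1. Write and print an article about {topic} in the style of {style}.\n"
    f"2. These are the required keywords: {', '.join(keywords)}.\n"
    "3. Rewrite and print the article with every keyword placed organically "
    "within the text or within expanded or new sentences."
)
```

Because step 3 runs in the same completion as step 1, the model gets to see the article it just wrote before rewriting it.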

This is one of the options I’ve tried, but it didn’t seem to work either. After a little more research, I think I have to use logit_bias for this, but I’m not sure how to programmatically tokenize the keyword list inputs.

Here the instructions aren’t followed one-by-one, but the result is satisfactory.

That gave me roughly the same output as the other solutions I’ve tried. The only difference between your example and mine is that instead of writing out the keywords [NVidia, AMD, etc.], I have them stored in a list that I’m interpolating via an f-string as {keywords}. Would that be making a difference?

If you pass a list object within an f-string, it should appear the same as if you wrote it out in the same spot. You can surround it with your own brackets or quotes to set the list apart.
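For illustration, interpolating a list into an f-string emits its Python repr, brackets and quotes included, while joining it yields plain text:

```python
keywords = ["NVidia", "AMD", "Intel"]

# Direct interpolation renders the list's repr, brackets and all.
as_repr = f"Keywords: {keywords}"
# Joining renders plain prose-friendly text.
as_text = f"Keywords: {', '.join(keywords)}"

print(as_repr)  # Keywords: ['NVidia', 'AMD', 'Intel']
print(as_text)  # Keywords: NVidia, AMD, Intel
```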

This is just a case of outsmarting the AI in general and pushing it to the limits of how it creates answers. The language of a prompt isn’t going to magically transform the next token it produces in the middle of a sentence into “keyword” without a semantic reason for it being there; you might have to move up to vital phrases or sentences that need to appear in the article at the 1/3 point (for example) to really reshape the generation.


Gotcha, I appreciate the response. I’ll continue messing around with the prompts and see if I can get it closer.

You could take your list of keywords, identify their tokens, then ramp up the logit bias for those tokens to help increase the probability of them being selected.

Try something like this:

import openai

openai.api_key = "your key here"
keywords = ['GPT', 'Nvidia', 'Generative AI']

input_messages = [
    {"role": "system", "content": "Generate articles that follow the user's instructions exactly."},
    {"role": "user", "content": f'''Please generate an article using every keyword from the following list of words at least once in the article: {keywords}.
Do not cluster the keywords together. The keywords must be sprinkled through the article in an organic manner, and each one used sparingly.
Ensure the article is at least 500 words long, and no longer than 1000 words.'''},
    {"role": "assistant", "content": f'''Here is an article using each keyword in this list {keywords}:

Article: '''},
]
print(input_messages)

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k-0613",
    messages=input_messages,
)

Edit: Screwed up the spacing when I transferred it into here. Should work now.

Note that {keywords} will be rendered as something like ['apple', 'banana'].
You might want to use something like article: {", ".join(keywords)} instead.

Now, that being said, you’re essentially trying to have the model solve the “caveman challenge,” which it’s famously not great at.

1 Like

I think putting the actual keywords into the system message (instead, or as well) would be more likely to get the model to “behave.”

Alternately (or additionally), follow the user message with an assistant message saying something like:

Absolutely! Here is an article about XXXX, I will ensure I use the keywords KW1, KW2, and KW3 throughout the article.

By putting the keywords and instructions in the system, user, and assistant messages, you greatly increase the likelihood of the model choosing to use the keywords.

Lastly, I still think taking the first token in each of the keywords and jacking its logit_bias to 10 or higher will be the most effective way to ensure they get used.

If all else fails, do all of them.


It seemed to do fairly well in my tests of that above. The models understand Python quite well; I’d actually think it would handle the task better if we leave it with the brackets, because it then has a more formal list to work with.

If generation time or API cost are less a concern than quality, doing what I have above there with a GPT-4 model does even better.

So I tried a few variations of what you’re suggesting here in both GPT-3.5 and 4 and haven’t been able to get it to work yet. 3.5 seems to ignore the instruction completely, writing ‘refrigerator’ when ‘fridge’ is a keyword, for example. When I tried it in 4, it spun for 10 minutes before timing out.

I believe the issue is that my instructions are overly complex. There are 13 more pairs of user/assistant instructions after what I posted above due to how specific I need the H1s/H2s/H3s and formatting in the paragraphs.

I’m going to try working with GPT-4 and trimming down my instructions. If that doesn’t work, I think looping over the keyword list with tokenizer like @chrstfer is suggesting might do the trick. That’s going to flex my newbie Python skills, but I’ll update if I get it working. Thanks for the suggestions everybody.

Quite simply: chat gpt-x models do not follow multi-shot examples well (while base GPT-3 picks them up rapidly). The models are too heavily pretrained to learn from them. It can sort of work as a mind-warp to create a new personality or beliefs; at most, you’ll be introducing unwanted biases when the model can’t diverge from the examples and instead recites them.

You should find a way to instruct. Again, making the AI say words it had no reason to interject is a challenge for which I don’t have an easy answer, other than expanding on the on-topic success I showed earlier.


making the AI say words it had no reason to interject is a challenge

Isn’t this the point of logit_bias though? Also, it does have a reason to interject them in most cases, i.e., ‘fridge’ instead of ‘refrigerator’.

I don’t believe this is something I can instruct. The keywords change every article and they have to be specific for SEO purposes.

With the API, the instructions can change with every article and API call, the particulars of operation can be set by injected text, and you can have a wall of behavior spelled out to get that list of new words into the new article.

You might even improve SEO just by telling the AI it is an SEO optimizer and having it extract web metadata and enhance the value of the first paragraphs. SEO is not simply spamming Google with irrelevant words; search indexers also have intelligence.


I know what SEO is; that’s why I’m working on a program that writes SEO-optimized articles. The keywords come from research done prior to article generation, and they must be these specific keywords, not whatever the API feels like spitting out.

the particulars of operation can be set by injected text, and you can have a wall of behavior spelled out to get that list of new words into the new article.

Yes, this is what I’ve tested 20 different ways, including your suggestions above, and it’s not working. Whether it be a system message, user message, telling it to rewrite the article and inject keywords, etc., it seems to be ignoring that instruction completely, or at least not knowing what to do with it.

Yeah, that’s too much for it. Pare them down. If you’re trying to get it to generate 13 separate articles, then do them all as separate queries. If they need to be related, you can ask it to summarize between each call. So: do article 1, then, as a separate call to the API, “Summarize this article. Be concise but ensure key details are included. Shorten it significantly.”; then do article 2 and include the summary as “Summary of article series so far: ” in your system prompt. After you get the second article, send over that summary and the new article for summarization. Wash, rinse, repeat. This might also save you money, because you can do the summarization with a cheaper model.

Also, @_j has a great point with “You might even SEO just by telling the AI it is a SEO optimizer…”; I bet that would improve the quality of the content a bunch once you split this monster query into a bunch of smaller ones.


You need to separate the logic from the keywords, not just in the code but in your mind. The keywords are interchangeable and incidental; they are just a bit of fuel to get an article to come out of the bot the way you want.

You might have more luck by giving it more keywords than you want to appear and letting the bot pick from them. Also, if you tell it it’s an SEO guru, it might start swapping words out if it “thinks” they would fit better. I wouldn’t immediately discard those swaps; check with your favorite keyword software/site to see if they actually are better. Sometimes it’s surprisingly good at picking paths I don’t see, and while I haven’t done SEO with it, I really do bet it’d be good at it.


Without knowing your keywords, I can’t comment on the best solution, but my advice would be to just do it in steps:

  1. First, tell the model to generate a list of sentences on the topic, one per keyword. This is very easy for GPT in general.

  2. Next, ask the model to use all of those sentences in an article on the topic. Here, I get GPT to use the following keywords (“breakfast cereal”, “basketball”, “flashlight”, “pajamas”, “furniture”) in an article about graphics cards. It’s a pretty bad article, but you can probably improve it easily by providing some system-message-style direction on style, tone, intended audience, etc.

https://chat.openai.com/share/bd4f5948-53f9-428a-a2ac-27b9191f3638

This works because the first task is something that isn’t hard. And, once you have those sentences made, the model has a path through its weights and probabilities for tying the intended topic to the keyword every time, because it already has the sentences. It can even bend them to flow more naturally on the second generation.

Trying to do the generation all in one go is a task that LLMs are specifically ill-suited to achieve, because it forces the generation to keep shifting and jumping to different probability spaces in a way that nothing in its training data gives it a roadmap to follow. So, just give it the roadmap ahead of time (one that it gives to itself!), and it’ll find its own way on the second go-round.


Why? Maybe it was just what my ChatGPT defaulted to and you used GPT-3.5, but if not, well… ChatGPT-4 is perfectly capable of generating an acceptable, even quite good, article. This kind of thing, having it generate sentences without knowing they’re going to be in a single article and then trying to get it to shoehorn them into one, leads to what you got: a weirdly disjointed mishmash of paragraphs with keywords sort of stuffed in there. Like, who thinks of “comfort” with a graphics card?

Not trying to be a dick, but if you’ve got GPT-4, this is both unnecessary and detrimental, IMO. If you want to use this method of generating sentences beforehand, you need a lot of different keywords; then bucket related keywords together and determine a topic beforehand. Do it iteratively, so it doesn’t just list the keywords together either.

Check out this code-interpreter completion of my original example. It could 100% be cleaned up to improve it for unguided automation, but I also wanted to give some examples of ways to get the bot to be an effective writer: AI Advancements: GPT-Nvidia Synergy

Compare that with this, which was built off the link you sent: GPU Insights Unveiled. It took longer and was more convoluted.