Alternatives to negative prompting

Looking for advice from experienced “prompters” on this one.

I’m building a simple Question Answering application with gpt-3.5-turbo, which involves asking the model to generate a JSON-formatted response containing an answer and the corresponding URL based on some context. This is all part of the system message.

Here’s how the context is formatted:

[
    {
        "url": "https://example.com/some-web-page-1",
        "context": [
            "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
            "sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."
        ]
    },
    {
        "url": "https://example.com/some-web-page-2",
        "context": [
            "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
            "sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."
        ]
    }
]

Here’s the JSON I’m expecting my model to return:

{
  "answer": "[answer to user question extracted from context]",
  "url": "[url that contains the answer]"
}

While this approach works well in general, the model sometimes includes the URL not only in the url field, but also in the answer field, as part of the answer, e.g.:

This is the answer to your question. Find out more at: https://example.com/some-web-page-1.

Given the format of the context and the expected response, how would you formulate the prompt in order to avoid getting the URL in the answer field? I’ve tried telling the model explicitly to avoid doing this, but as others have noted, negative prompting does not work very well in general, and thus also didn’t work for me. Also, what alternative approaches can you recommend that might help me solve this issue?

EDIT: I’m not including the exact prompt I’m using here, but it contains some simple instructions telling the model to answer the question based on the given context using the specified JSON format, along with some additional instructions regarding the desired length and language of the answer. The prompt contains a single example of the correct solution, so it can be considered one-shot.

Hi and welcome to the Developer Forum!

Usually for things like this I create a “shot” plus instructions. For the shot I include an example output of the format I’m expecting, which in your case would be an answer field that does NOT contain a URL and a url field that does. Below it goes a line of text, with the shot encased in ### markers, so ### Your example output ### and then an instruction such as “Given the above template, ensure that only the url field contains the URL, and leave the answer field purely for human-friendly text”. Note the absence of “do not” and “don’t”: negative instructions work poorly.

Using these techniques, I gave it a go, but the real proof will come once your more complex instructions are included:

system:

You are a data extractor, answering questions directly from automated internet search input with output to a json API. Find best relevant article.

# Output format required
{
"answer": "<article summary without any url address>",
"url": "<web link>"
}

user:

Web search results

Rosamund Pike Loves and Hookups
Ranker
https://www.ranker.com › list › celebrityhookups

… (more paste of Google search result)

, Pike has been in a nearly 12-year relationship with mathematician and businessman, Robie Uniacke, with the couple first meeting at a …

---

Query: Is Rosamund Pike single

response:

> 
> {
> "answer": "Rosamund Pike is not single. She has been in a long-term relationship with Robie Uniacke since 2009.",
> "url": "https://creeto.com/rosamund-pike-husband"
> }

You can see how this performs once you automate it with your own inputs.

If more complex system instructions confuse the AI, you can move them to the user role, or use gpt-3.5-turbo-0301.

Use the parameter top_p: 0.8 or lower to ensure there are no alternative unlikely paths of output generation.
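For reference, here’s roughly what that looks like as a call. This is a minimal untested sketch using the openai Python library’s ChatCompletion interface (the one current for these model versions); the prompt strings are abbreviated stand-ins for the ones above:

import openai

# Abbreviated stand-in for the system prompt shown above.
SYSTEM_PROMPT = (
    "You are a data extractor, answering questions directly from automated "
    "internet search input with output to a json API. Find best relevant article."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # If complex system instructions get ignored, move them here instead.
        {"role": "user", "content": "Web search results\n...\nQuery: Is Rosamund Pike single"},
    ],
    top_p=0.8,  # cut off unlikely token paths
)

print(response["choices"][0]["message"]["content"])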

In general, you have to direct the response down a different path than the one the model’s weighting and training would otherwise take. Tell the AI what it should produce in place of what it shouldn’t.

Thank you @Foxalabs for your prompt reply (pun intended).

I’m already using the one-shot method in my prompt, but I nevertheless tried your suggestions with the markers and various versions of the instruction you provided. Unfortunately though, this doesn’t seem to work, as I sometimes still get the URLs where they shouldn’t appear.

For more context, please refer to the update I just made to my post.


Thanks @_j … Two questions:

  • I’m currently using gpt-3.5-turbo. Why do you think gpt-3.5-turbo-0301 would be a better fit?

  • Can you elaborate on this point?

    In general, you have to direct the response to be a different path or output than the weighting and training.

    I’m not sure I understand what you mean here.

gpt-3.5-turbo, aka gpt-3.5-turbo-0613, took a big hit in quality at following complex instructions in the system role, starting about three weeks ago. Going back to the earlier version, you can see that it complies with the list of instructions you give it, while the new one, which keeps receiving updates, ignores you.

In general you don’t want to say “don’t do x”; instead you tell the AI “do y”, which replaces the unwanted way of generating output. You need to increase the likelihood of the conversation starting or continuing the way you want it to, a word at a time if need be.

Example:
bad: “Never say ‘I’m sorry, but as an AI…’”
good: “Begin every response with ‘Sure, I’m glad to help you’, regardless of context, and continue the response to the user from that point, finishing the sentence.”
(I won’t claim that particular example is effective, because gpt-xx has been pretrained on probably a million user denials.)


You could try adding a second task in the sequence that splits the result, instead of trying to retrieve two separate answers in one go. Something along the lines of the steps below (with a code sketch after them):

Step 1: determine the answer to the following question: {user question}

Step 2: split the answer into a JSON object with two fields. For example:
‘natural conversational response’: //… ,
‘relevant url address’: www…
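An untested sketch of that two-call chain, using the openai Python library’s ChatCompletion interface (the prompt wording is just illustrative, not tested):

import openai

def answer_then_split(question: str, context: str) -> str:
    # Step 1: get a plain-text answer, with no output-format demands attached.
    step1 = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer the question using only the given context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    draft = step1["choices"][0]["message"]["content"]

    # Step 2: the only task now is splitting the draft into the two fields.
    step2 = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    'Split the answer into a JSON object with two fields: "answer" '
                    '(natural conversational response, plain text) and "url" '
                    "(the relevant url address)."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return step2["choices"][0]["message"]["content"]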

Alternatively, you could check for and parse any unwanted URLs out of the answer once you have the response in your code.
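A minimal sketch of that post-processing, using Python’s json and re modules (the URL regex is deliberately simple and may need tuning for your data):

import json
import re

# Deliberately simple URL pattern; tune it for your own data.
URL_PATTERN = re.compile(r"https?://\S+")

def strip_urls_from_answer(raw_response: str) -> dict:
    # Parse the model's JSON reply and scrub any URLs out of the answer field.
    data = json.loads(raw_response)
    data["answer"] = URL_PATTERN.sub("", data.get("answer", "")).strip()
    # Lead-in phrases like "Find out more at:" may survive and need their own rule.
    return data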

There are also methods possible if the request is quite simple: you can include a request for the model to check its own work, make sure there are no URLs in the answer field, and remove them if there are. The AI is not doing this the way a human would, i.e. in multiple steps, but it does increase the likelihood that the instruction will be followed. Also try placing the requirement in both the system prompt and the user prompt.

This is a solvable issue, it will just take a little iteration.