How to get GPT-3 to stick to prompt input?

I’m trying to build a tool that will answer some questions about a product from a given text.

Using OpenAI's API, I'm sending the text and the question.
Most of the responses are OK, but the issue is that GPT-3 is adding data that is not in the text.

I guess since it was trained on data from the internet, it has information about these products and adds it to the response because it's the most likely answer to return.

How do I make it stick to the prompt input only?

You have to instruct it to restrict itself to the given product information.

Any idea how to instruct it?
I tried things like: “write <a/b/a> based on the above text only.”
But it didn't change the results much.

So I tried a few things, and this worked pretty well:

Based only on the following information, write blablabla. If the information is not available, write "I don't know".
----
<<text>>
----
BLABLABLA:

If anyone has more optimization suggestions, I'd appreciate it.
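
For what it's worth, here is a minimal sketch of how a prompt like that might be sent through the legacy openai Python package (v0.x Completions endpoint). The model name, token limit, and placeholder text are assumptions rather than anything from this thread; a temperature of 0 also tends to cut down on invented details.

import openai  # legacy openai package (v0.x)

openai.api_key = "YOUR_API_KEY"  # placeholder

product_text = "..."  # the product text you already have

prompt = (
    "Based only on the following information, write a short answer. "
    'If the information is not available, write "I don\'t know".\n'
    "----\n"
    f"{product_text}\n"
    "----\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumption: any GPT-3 completion model could go here
    prompt=prompt,
    max_tokens=150,
    temperature=0,  # lower temperature = fewer made-up details
)

print(response["choices"][0]["text"].strip())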

Give it more / better examples

don’t say what you don’t want

give it more of what you want 🙂

if you need any help rewriting it lmk
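
To make the "more / better examples" advice concrete, here is a rough few-shot sketch in Python; the kettle product, questions, and answers below are invented purely for illustration.

# Rough sketch of a few-shot prompt: a couple of hand-written Q&A examples first,
# then the real text and question. All example content here is invented.

def build_prompt(text, question):
    return (
        "Answer the question using only the provided text. "
        'If the text does not contain the answer, write "I don\'t know".\n\n'
        "Text: The X100 kettle has a 1.7 L capacity and a brushed-steel body.\n"
        "Question: What material is the X100 made of?\n"
        "Answer: Brushed steel.\n\n"
        "Text: The X100 kettle has a 1.7 L capacity and a brushed-steel body.\n"
        "Question: Does the X100 have a keep-warm mode?\n"
        "Answer: I don't know.\n\n"
        f"Text: {text}\n"
        f"Question: {question}\n"
        "Answer:"
    )

The second example deliberately shows the model what "I don't know" should look like, which seems to matter as much as showing it good answers.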

It’s helpful to include some real life context in the prompt. For example, “a product expert at [product-building company] answers potential user questions based on [the product manual] as follows…” or “a marketing [intern/manager] at a [tech/social media/other company] is writing [answers/summaries/snippets/ads] about the company’s products based on [social media posts/online reviews/helpdesk transcripts] as follows…”

"It’s helpful to include some real life context in the prompt. For example, “a product expert at [product-building company] answers potential user questions based on [the product manual] as follows…” or “a marketing [intern/manager] at a [tech/social media/other company] is writing [answers/summaries/snippets/ads] about the company’s products based on [social media posts/online reviews/helpdesk transcripts] as follows…”

PLUS

"Give it more / better examples

don’t say what you don’t want

give it more of what you want"

This is the way to do it!

There's no real tight solution to stop GPT-3 from hallucinating; there are some tricks, like the ones suggested here. I believe a tighter solution can be achieved with fine-tuning: for example, you give it triplets of context-question-answer and keep the answers as extractive as possible (similar wording to the context).
That kind of setup got me reasonable results, but you still get hallucinations once in a while.
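
As a sketch of what those triplets might look like on disk, here is one way to write them into the prompt/completion JSONL format that legacy GPT-3 fine-tuning expects. The product facts below are invented, and the field layout is just one possible choice.

import json

# Context-question-answer triplets, with answers kept extractive
# (wording close to the context). The example below is invented.
triplets = [
    {
        "context": "The Z5 phone ships with a 5000 mAh battery and a 6.1-inch display.",
        "question": "What battery does the Z5 have?",
        "answer": "A 5000 mAh battery.",
    },
]

with open("qa_finetune.jsonl", "w") as f:
    for t in triplets:
        record = {
            "prompt": f"Context: {t['context']}\nQuestion: {t['question']}\nAnswer:",
            "completion": " " + t["answer"],  # leading space, per the fine-tuning guidance
        }
        f.write(json.dumps(record) + "\n")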

Thank you so much for the detailed advice.
I didn’t quite get how to implement it…

I have a product review article, let’s call it [[text]].

So far I did something like this:

Based only on the following product review, summarize how [[product name]] has evolved from previous models or releases to provide improvements, address issues, and explain what sets this model apart. If the information is not available, write "I don't know".
------
[[text]]
------
A comparison of [[product name]] to previous models or releases:

Going by this example:
“a product expert at [[product-building company]] answers potential user questions based on [[the product manual]] as follows…”

Do you mean something like this:

A product expert at [[product manufacturing company]] answers potential user questions about [[product name]] based on:
[[text]]
A comparison of [[product name]] to previous models or releases:

?

I think they mean you should feed it one or two examples of what you want, or use a dataset with many examples of what you want for fine-tuning. In other words, you'll need to produce stellar examples of what you want GPT-3 to return. Write out an entire review (or a dozen) that reflects what you want the language model to give you.

I've built many Q&A models using GPT-3.
Just instruct it properly, or use a few-shot learning approach to get the result you want.

I'm struggling with fine-tuning a Q&A model. Would you mind sharing your JSONL file?

Use a few-shot learning approach; it gives good results.