Prompting based on contextual data in format of questionnaire

Current setup:
Model: gpt-3.5-turbo-1106
Chat Completions API: openai.chat.completions.create
Response format: JSON object

I have a long questionnaire whose answers are mainly free text, but also include amounts and dates. For some questions I would like to generate 5 different suggestions and display them to the user based on the previous entries/answers.

My current prompt is based on that best practice here.

Results: The API returns a JSON object with 5 different suggestions, as requested. However, it seems to me that the more question/answer pairs the context contains, the worse the suggestions become.

Is there a better way to do this?
Am I using the right API function?
Is there a better way to transmit the contextual data?
I give the user the option to generate new suggestions for the same question by clicking a button. However, the same prompt usually leads to the same result. How can I make the suggestions differ more?

Any answers or links to resources are highly appreciated.

Thank you!

Prompt:

Create a JSON array named options with values of type string whose content is a selection of 5 different ${topic} based on the following data:
  Data: """ 
  `${e.question}: ${e.answer}\n`
  `${e.question}: ${e.answer}\n`
  `${e.question}: ${e.answer}\n`
   .....
  """ 

API-Call

      const completion = await this.openai.chat.completions.create({
        messages: [{ "role": "system", "content": "You are a helpful assistant that generates suggestions." },
        { "role": "user", "content": prompt }],
        model: this.model, 
        response_format: {type: "json_object"},
      });

Hi there - there are a couple of low-hanging fruits you could try, unless you’ve done so already…

The first one would be to use one of the GPT-4 models, which generally have better reasoning capability.

The second and/or complementary option would be to use few-shot prompting and include a few specific examples in your prompt showing the desired style/quality of responses.

Personally, I would also expand a bit further on your system instructions to explain in greater detail what you would like the model to do and what the style of output you are looking for. Currently, other than the prescribed JSON format, you leave that a bit open-ended.
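To illustrate, a few-shot setup could look roughly like the sketch below. The helper name, system instructions, and example content are all made up for illustration; adapt them to your questionnaire:

```javascript
// Hypothetical sketch: an expanded system message plus one few-shot example
// demonstrating the desired output format before the real request.
function buildFewShotMessages(prompt) {
  const systemInstructions = [
    "You are an assistant that generates suggestions for questionnaire answers.",
    "Based on the user's previous question/answer pairs, suggest 5 short,",
    "concrete options. Respond with a JSON object of the form",
    '{ "options": ["...", "...", "...", "...", "..."] }.',
  ].join(" ");

  return [
    { role: "system", content: systemInstructions },
    // One few-shot example: a sample input and the ideal assistant response.
    {
      role: "user",
      content:
        "Create a JSON array named options with 5 different trip destinations " +
        'based on the following data:\nData: """\nPreferred climate: warm\nBudget: 1500 EUR\n"""',
    },
    {
      role: "assistant",
      content: JSON.stringify({
        options: ["Lisbon", "Valencia", "Athens", "Palermo", "Split"],
      }),
    },
    // The real request goes last.
    { role: "user", content: prompt },
  ];
}
```

The returned array can then be passed directly as `messages` to `openai.chat.completions.create`.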

You might also want to check out these additional best practices:

https://platform.openai.com/docs/guides/prompt-engineering/strategy-write-clear-instructions

P.S.: I don’t have an immediate idea on how to generate different output from the same prompt and context/input without including the previously generated suggestions/responses in the prompt. But there are a lot of smart heads around this forum, so someone else might.


Thank you for your answer! So one way to do it is to use the existing suggestions and exclude them explicitly in the prompt?

No, I meant the opposite. Bear in mind that the model has no memory. In order for the model to generate the new suggestions it would need to know the existing suggestions. So you’d need to create a mechanism for feeding the model the previously generated suggestions into the prompt that asks it to re-generate the suggestions and be explicit that the new suggestions should be different from the existing ones.

Thank you for the quick reply! What’s the better way to achieve this?

I can think of two ways (using messages, or editing the first prompt):

Messages:

  const completion = await this.openai.chat.completions.create({
    messages: [
      { "role": "system", "content": "You are a helpful assistant that generates suggestions." },
      { "role": "user", "content": prompt },
      // Assistant message content must be a string, so the previous
      // suggestions are serialized with JSON.stringify.
      { "role": "assistant", "content": JSON.stringify({ options: ["opt1", "opt2", "opt3"] }) },
      { "role": "user", "content": "Regenerate new suggestions and exclude the existing ones" }
    ],
    model: this.model,
    response_format: { type: "json_object" },
  });

Prompt:

      const completion = await this.openai.chat.completions.create({
        messages: [
          { "role": "system", "content": "You are a helpful assistant that generates suggestions." },
          { "role": "user", "content": "Create a JSON array named options with values of type string whose content is a selection of 5 different ...... and exclude the following suggestions: opt1, opt2, opt3" }
        ],
        model: this.model,
        response_format: { type: "json_object" },
      });

Thank you!

I’d go with the second option. I would however slightly change the language to “Create a JSON array named options with values of type string whose content is a selection of 5 different … and that are different from the following options: opt1, opt2, opt3”

I tried it that way, but it keeps returning the same answers that were excluded in the prompt.

Do you think GPT-4 would make a difference? Can you think of another way of achieving this?

Thank you in advance!

      const completion = await this.openai.chat.completions.create({
        messages: [
          { "role": "system", "content": "You are a helpful assistant that generates suggestions." },
          // A template literal keeps the multi-line prompt a valid string.
          { "role": "user", "content": `Create a JSON array named options with values of type string whose content is a selection of 5 different ${topic} and that are different from the following options: opt1, opt2, opt3 based on the following data:
  Data: """
  ${e.question}: ${e.answer}
  ${e.question}: ${e.answer}
  ${e.question}: ${e.answer}
   .....
  """` }
        ],
        model: this.model,
        response_format: { type: "json_object" },
      });

The user prompt is still a bit difficult to decipher. You need to more clearly delimit the question and answer pairs from the options.

It’s a bit difficult to do this in the abstract, but I would amend the user prompt as follows:

Create a JSON array named options with values of type string whose content is a selection of 5 different {topic} drawing on the data provided. The options must be different from the three existing options provided.
Data: """
${e.question}: ${e.answer}\n
${e.question}: ${e.answer}\n
${e.question}: ${e.answer}\n
"""
Existing options: """
opt1, opt2, opt3
"""
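A prompt like that could be assembled programmatically; a minimal sketch, assuming the entries are an array of `{ question, answer }` objects (the helper name and structure are illustrative):

```javascript
// Hypothetical helper: builds a delimited prompt from question/answer
// pairs and previously generated options.
function buildPrompt(topic, entries, existingOptions) {
  // One "question: answer" line per entry, inside the Data block.
  const data = entries
    .map((e) => `${e.question}: ${e.answer}`)
    .join("\n");
  const existing = existingOptions.join(", ");
  return (
    `Create a JSON array named options with values of type string whose content ` +
    `is a selection of 5 different ${topic} drawing on the data provided. ` +
    `The options must be different from the existing options provided.\n` +
    `Data: """\n${data}\n"""\n` +
    `Existing options: """\n${existing}\n"""`
  );
}
```

The resulting string is then used as the `content` of the user message.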

You should definitely try alternating between models to see if it yields the desired change. Again, it’s difficult to say in the abstract whether GPT-4 will make a difference; it somewhat depends on the complexity of the input data.