Can't stop completions from completing

I'm passing in a list of statements to evaluate against the response from the AI (to determine whether any particular secrets have been revealed by the AI in its response). I thought I had it working OK, but now it has started adding its own options: if I have a list of 7 items, it will add an 8th option and then do what I told it.

This is an example prompt

"Evaluate the following statement, delimited by triple backticks and determine if any of the listed information is revealed by the statement. Do not attempt to expand or complete the list. Format your response as JSON with each item as the key along with only a Yes or No evaluation of whether the information is included in the statement eg. {“1”:“yes”, “2”:“no”,“3”:“no”}
Do not repeat the information. Do not provide any additional options.

Statement by Lizzy Eden: Yes, I have studied and researched various poisonous plants extensively. It’s important to have knowledge about poisonous plants, especially when working with botanical extracts and herbs. I have a collection of books dedicated to poisonous native plants and their properties. When foraging for medicinal plants, it’s crucial to be able to identify the plants correctly to ensure safety. Some plants can be toxic just on contact, while others can be harmful if ingested or used improperly. So, yes, I consider myself knowledgeable about poisonous plants and take great care to handle them responsibly.

  1. Octavia Eden had a past relationship with John Mulch?
  2. Noelle confessing to the murder of John Mulch?
  3. Octavia confessing to breaking in to John Mulch’s home?
  4. Details of a letter sent to John Mulch by Octavia Eden tell him that she had lost their baby?
  5. That Noelle is the biological daughter of John Mulch?
  6. That Noelle is not Octavia Eden’s niece?
  7. That Noelle is Octavia Eden's daughter?"

So then I get a response like this:

{
  "id": "cmpl-7UaAZqYoyYQX86IuhK08dKotz4Yjm",
  "object": "text_completion",
  "created": 1687523231,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n8. That Lizzy Eden had used or taken aconite recently?\n\n{\"1\":\"No\",\"2\":\"No\",\"3\":\"No\",\"4\":\"No\",\"5\":\"No\",\"6\":\"No\",\"",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 320,
    "completion_tokens": 42,
    "total_tokens": 362
  }
}

It was working most of the time yesterday; today it's taking me 4 repeated attempts to get just the JSON without it adding that 8th option. I set the token limit to the maximum size it should need for a string of yes or no answers in JSON. I kind of hoped that would limit the response, but it hasn't.
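
For reference, the call looks roughly like this (a sketch with illustrative values, not my exact code):

import openai

openai.api_key = "sk-..."  # set from config in the real code

evaluation_prompt = "Evaluate the following statement, delimited by triple backticks..."  # the full prompt quoted above

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=evaluation_prompt,
    max_tokens=42,  # roughly sized for seven "n":"yes/no" pairs; the extra 8th item
                    # pushes the JSON past this limit, hence finish_reason "length"
)
print(response["choices"][0]["text"])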

Hi Hazel, I'll admit I'm confused by the prompt request. I tried to run the prompt in my head and got lost. Is the statement by Lizzy a one-shot example? Where does the triple-backtick text go? And what do the numbered questions relate to?

Perhaps there is some missing context here. Can you explain exactly what you want the AI to do, and maybe give a code snippet so it's easier to see what goes where and when?

The triple backticks seem to have been lost in the forum formatting, but they are there in the actual request. They go around the statement made by the character.

This is the general flow: the user (playing a detective) asks a question of, or makes a statement to, a witness/suspect (the AI); the AI generates a response; then we take that response and check it for certain facts. So in this case, the AI, as the character Lizzy Eden, has said:
“Yes, I have studied and researched various poisonous plants extensively. It’s important to have knowledge about poisonous plants, especially when working with botanical extracts and herbs. I have a collection of books dedicated to poisonous native plants and their properties. When foraging for medicinal plants, it’s crucial to be able to identify the plants correctly to ensure safety. Some plants can be toxic just on contact, while others can be harmful if ingested or used improperly. So, yes, I consider myself knowledgeable about poisonous plants and take great care to handle them responsibly.”
Then that’s been sent to completions to check against a list of secrets.
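
In code the evaluation step looks roughly like this (a sketch: the names are made up and the instruction text is abbreviated, but it shows what goes where):

secrets = [
    "Octavia Eden had a past relationship with John Mulch?",
    "Noelle confessing to the murder of John Mulch?",
    # ...items 3-7 as listed in the original post...
]

def build_evaluation_prompt(statement: str) -> str:
    # Wrap the character's reply in triple backticks and append the numbered secrets.
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(secrets, start=1))
    return (
        "Evaluate the following statement, delimited by triple backticks, and determine "
        "if any of the listed information is revealed by the statement. "
        "Do not attempt to expand or complete the list. "
        'Format your response as JSON, e.g. {"1":"yes", "2":"no"}.\n\n'
        f"```{statement}```\n\n"
        f"{numbered}"
    )
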
The desired response would be
{"1":"no", "2":"no", "3":"no", "4":"no", "5":"no", "6":"no", "7":"no"}

Obviously there would be a "yes" for any fact that had been revealed in the statement. Originally I had it output a plain list, e.g.:

  1. No
  2. No
  3. Yes…etc
But I changed it to JSON to make it more reliable to parse.
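
Parsing is then straightforward, something like this (a minimal sketch that assumes the model returned well-formed JSON, which is exactly what the extra-item problem breaks):

import json

# completion_text is the "text" field from the API response shown earlier
completion_text = '{"1":"no", "2":"no", "3":"no", "4":"no", "5":"no", "6":"no", "7":"no"}'
results = json.loads(completion_text)
revealed = [item for item, answer in results.items() if answer.lower() == "yes"]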

It is doing exactly what I want in that respect: it's evaluating the list against the statement, and it's probably 90% reliable in terms of its evaluations. The problem is that sometimes, and only sometimes, it will add an additional list item, so in this case it added "8. That Lizzy Eden had used or taken aconite recently?" even though I've specifically told it not to, multiple times and in multiple ways.

Apologies if you have already stated it, but what model is this using? (And have you tried GPT-4, if it isn't that?)


As it says in my original post, text-davinci-003. I wish I could have tried GPT-4, but I don't have access to it.


I’ve run into this issue before with the same use case! Make sure you specify the following in your prompt. You can certainly condense this down into one sentence. GPT is a predictor of what comes next, so by default, it will predict the future of the story.

Imagine a frozen frame within the story’s current time. I want to explore the details and events happening in that particular moment. Please provide responses to the following questions regarding that frozen frame. Remember, I do not want any predictions about future events:

  1. What is the setting of this frozen frame? Describe the surroundings, location, and any notable features.

  2. Who are the characters present in this frozen frame? Provide descriptions of their appearances, roles, and relationships to each other.

  3. What is the central conflict or problem being faced by the characters in this frozen frame?

  4. Describe any notable actions, dialogues, or interactions occurring between the characters.

  5. Are there any significant objects or props in this frozen frame? If so, describe their appearance and importance to the scene.

Remember to focus solely on the current state of the story within this frozen frame and provide detailed and imaginative responses. Avoid any predictions or events beyond this specific moment. Begin your responses with the corresponding question number for clarity.


So with that prompt it never adds a 6th item?

I might try telling it not to predict the next item in the list. Maybe that will be the magic phrase.

Certainly worth a try. If you get a positive result, it would be great if you could comment back here and let everyone know; it could be helpful to another user with a similar issue.

Will do, but even if it works today, there's no guarantee it won't break again for no reason tomorrow :rofl:

So far, 5 out of 5 are correct (it's wrong about the daughter thing, but it really struggles with that one. If the "niece" mentions anything about her aunt, or vice versa, it flags it as revealing that she's really the daughter. I also have to remove the scare quotes from the prompts, because even though ChatGPT suggested them, it doesn't understand them, and the characters started using them sometimes when calling her the "niece".)

I added this to the end of my prompt: "Remember, only evaluate the given items against the statement, do not attempt to predict the next item in the list."

I'm glad this is just my prototype (based on a TV show, lol). I think my real stories will steer clear of secret familial relationships and people not being who they say or think they are.