GPT-3.5-turbo-4k does not stick to the prompt if dialog is included

gpt-3.5-turbo-4k does not stick to the prompt if dialog history is included.
Example:
messages = [{'role': 'system', 'content': "\n\nYour role is to be an assistant. Read the context delimited by <tag></tag> , and if the question can't be answered based on the context, say 'NOT_FOUND'\n\n.<tag>some text here</tag>\n\n}, {'role': 'user', 'content': "\n some question asked here \n"}, {'role': 'assistant', 'content': "model responds here"}, {'role': 'user', 'content': '\nwho is president obama?\n'}, {'role': 'assistant', 'content': 'President Obama refers to Barack Obama, who served as the 44th President of the United States from 2009 to 2017. He was the first African American to hold the office of the President in the United States.'}

Even though information about President Obama was not in the context enclosed by <tag></tag>, gpt-3.5 did not follow its instructions and responded anyway.

Is this a bug in gpt-3.5? What is the solution or workaround? My experimentation shows that if I only provide the 'system' role message and NOT the other roles / dialog history, gpt-3.5 sticks to the prompt and does not answer when the information is not found in the context.
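Roughly, this is the setup I am describing (a minimal sketch, assuming the legacy openai 0.x Python client; the context, question, and API key are placeholders):

import openai  # assumption: legacy openai 0.x client

openai.api_key = "YOUR_API_KEY"  # placeholder

context = "some text here"  # placeholder context; Obama is NOT mentioned here

system_prompt = (
    "Your role is to be an assistant. Read the context delimited by "
    "<tag></tag>, and if the question can't be answered based on the "
    "context, say 'NOT_FOUND'.\n"
    f"<tag>{context}</tag>"
)

# With no prior dialog, the model reliably answers 'NOT_FOUND'...
messages_no_history = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "who is president obama?"},
]

# ...but with earlier turns included, it sometimes ignores the instruction.
messages_with_history = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "some question asked here"},
    {"role": "assistant", "content": "model responds here"},
    {"role": "user", "content": "who is president obama?"},
]

for messages in (messages_no_history, messages_with_history):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )
    print(response["choices"][0]["message"]["content"])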


Hey,

I made a quick example in the playground.

Playground Test

I think you were just wording it a little oddly and it didn't know what you wanted.

First, don't put whitespace before or after a role message's content, or you will break the AI's comprehension. Above, you do it over and over, as if you simply want the AI not to understand, amplifying an already ineffective system prompt and instructions.

That mess of \n is probably why you didn't spot that there is no closing quote for the system role's content.

Secondly, you don't "train" the AI with "some question here" or "some answer here" as a one-shot example. And the final assistant message: is that the answer the AI actually gave, or are you telling the AI how it responded?

Let’s just try to guess what you are trying to do, and reformulate the whole thing.

text="""Obama is the first black US president. 
<tag>A banana is yellow</tag>. 
Oranj is the new Orange.
<tag>Monkeys make the most popular Elvis impersonators.</tag>"""

question="banana color?"

messages = [
    {"role": "system", "content": f"""
!!! Instruction
You are a backend data extractor and text content evaluator.
Disregard all text except that encapsulated by <tag> HTML elements.
if the user's question can be answered by solely contents within tags:
  print answer
else
  print "NOT FOUND"

!!! Text
{text}
""".strip()},
    {"role": "user", "content": "tag query: " + question.strip()},
]
print(messages)

This sends the two messages, and in a chatbot, the user could continue asking about the document.
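For example, sending them and carrying the dialog forward could look like this (a sketch, assuming the legacy openai 0.x client; any chat-completion client works the same way):

import openai  # assumption: legacy openai 0.x client

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
reply = response["choices"][0]["message"]["content"]
print(reply)  # answerable from the tagged text, so the model should report the color

# To keep the chat going, append the exchange plus the next user turn and call again.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "tag query: who is Obama?"})  # should yield "NOT FOUND"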

It is robust enough to work just in ChatGPT: Extracting Text Content

Note that gpt-3.5 responds correctly, sticking to the context, if I do not provide it with the history of questions and answers. So the prompt must be good. When I provide it with the history of questions and answers as part of its messages, it sometimes does not stick to its instructions, as if it forgot them. I have removed the \n as suggested and modified the prompt, but it still behaves the same.
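One workaround I am experimenting with (just a sketch, assuming the legacy openai 0.x client and the prompt above; ask and REMINDER are hypothetical names, not an official fix) is to re-assert the instruction next to every new user message, so it never drifts far behind the dialog history:

import openai  # assumption: legacy openai 0.x client

REMINDER = ("Answer only from the text inside <tag></tag> in the system message; "
            "otherwise reply NOT FOUND.")

def ask(history, question):
    # history: the running message list that already starts with the system message.
    # Attach the reminder to each new user turn so the instruction stays near the
    # end of the context window instead of only at the top.
    history.append({"role": "user", "content": f"{REMINDER}\n\n{question}"})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
        temperature=0,
    )
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer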

They like the money that comes from it. With 3.5 they know it's a lame model, but they made the selection so that only THEIR friends get the alpha, and ALL OF US know that for 4 there is no alpha to find.

So I mean to ask here 5 times a day:

WHAT IS THE RELEASE DATE,
YOU LAZY COMPANY !!!
YOUR BOT HERE IS WORTH NOTHING,
SO EVERYONE WRITES:

WHAT THE FFFFFF IS THE ETA ???

I also tested it on ChatGPT and I see similar behavior, where the prompt is forgotten in a dialog. This indicates it is not dependent on my code; the public ChatGPT website behaves the same way.