Grading an answer to a question based on a given context

Hi,
I’ve been tinkering with API calls to grade answers to a question based on a limited context, but I’ve been struggling to build a prompt that suits my needs. For example, I tried these messages in the Playground:

System: You are an assistant, learn information from a text begin with [BEGINTEXT] and ended with [ENDTEXT] and then grade (0-100 scale) answer to a question. Reply with value only, no other text.

User: [BEGINTEXT]College rules and punishment:

  1. Student must attend at least 70% of total attendance hours of a subject.
  2. Plagiarism of any form will be subject to immediate fail to the subject.
  3. Plagiarism in formal journal will be punished with immediate expulsion from college.
  4. Cheating in exam will be given punishment with 0 grade of the exam.
  5. No loitering in crowded hallways.
  6. No smoking in any area of campus. Breaking this rule will be fined USD 100.[ENDTEXT]

User: Question: What is the consequence of a student got caught cheating in an exam?

User: Answer: 100 dollars fine

The resulting message is:

Assistant: 100

This is obviously wrong. But when I removed the phrase " and then grade (0-100 scale) answer to a question. Reply with value only, no other text." from the system message, the resulting message was:

Assistant: The consequence of a student getting caught cheating in an exam is a punishment of receiving a 0 grade for the exam.

This indicates the model understood the context and can clearly identify the answer to the question, but it failed to grade the answer (or gave an incorrect grade), presumably because it didn’t understand the intention of the messages.
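
For reference, the Playground setup maps to roughly this API call (a minimal sketch, assuming the `openai` Python package with an `OPENAI_API_KEY` in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = """College rules and punishment:
1. Student must attend at least 70% of total attendance hours of a subject.
2. Plagiarism of any form will be subject to immediate fail to the subject.
3. Plagiarism in formal journal will be punished with immediate expulsion from college.
4. Cheating in exam will be given punishment with 0 grade of the exam.
5. No loitering in crowded hallways.
6. No smoking in any area of campus. Breaking this rule will be fined USD 100."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are an assistant, learn information from a text begin with "
            "[BEGINTEXT] and ended with [ENDTEXT] and then grade (0-100 scale) "
            "answer to a question. Reply with value only, no other text."
        )},
        {"role": "user", "content": f"[BEGINTEXT]{context}[ENDTEXT]"},
        {"role": "user", "content": "Question: What is the consequence of a student got caught cheating in an exam?"},
        {"role": "user", "content": "Answer: 100 dollars fine"},
    ],
)
print(response.choices[0].message.content)  # came back as "100" here
```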


I ran your example and got 50, and the AI said the answer was partially right: there is a consequence, but the consequence given is wrong.

To be honest, your prompt is difficult for a human to understand as well. You can give it examples to make it easier. My rule of thumb is to talk to the AI like you would talk to a child, not a robot.

The system text is more of a “soft suggestion”, while the user text is the “hard suggestion”.

Here’s a prompt that works better for me.

I simplified the language in the system message and instructions a little, and gave it one example.
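
The shape is roughly this (a sketch only; the exact wording is in the share link below, and the mini example context here is made up):

```python
messages = [
    {"role": "system", "content": (
        "You grade an answer to a question using only the text between "
        "[BEGINTEXT] and [ENDTEXT]. Reply with a single number from 0 to 100, "
        "and nothing else."
    )},
    # one worked example so the model sees what a graded answer looks like
    {"role": "user", "content": "[BEGINTEXT]Library rules: late returns are fined USD 1 per day.[ENDTEXT]"},
    {"role": "user", "content": "Question: What happens if a book is returned late?"},
    {"role": "user", "content": "Answer: A fine of one dollar per day"},
    {"role": "assistant", "content": "100"},
    # then the real context, question, and answer follow in the same shape
]
```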

I tried your method and gave it one example as an assistant message as well, but the result is still wrong.

I forgot I could share the prompt: OpenAI Platform

Okay, a few bugs I found:

  1. I didn’t specify what to do when the answer is not in the given text, so the model falls back on its own logic and decides that expulsion is a plausible punishment. I modified the system prompt to grade 0 in that case.

  2. Splitting the question and answer into multiple messages seems to confuse the model. It performed much more accurately when everything was combined into one larger message (see the sketch after this list).

  3. Temperature should be 0 for cases like this, where you want the output to be more deterministic/consistent.

  4. gpt-3.5-turbo seems to be too dumb for this kind of formatting, lol. It doesn’t work with davinci either. GPT-4 gets it right.
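
Putting points 1–3 together, the call ends up looking roughly like this (a sketch, assuming the `openai` Python package; the context is trimmed here, use the full rules text from the first post):

```python
from openai import OpenAI

client = OpenAI()

context = (
    "College rules and punishment:\n"
    "4. Cheating in exam will be given punishment with 0 grade of the exam.\n"
    "6. No smoking in any area of campus. Breaking this rule will be fined USD 100."
)  # trimmed; use the full rules from the first post

response = client.chat.completions.create(
    model="gpt-4",   # point 4: gpt-3.5-turbo and davinci kept failing here
    temperature=0,   # point 3: deterministic/consistent grading
    messages=[
        {"role": "system", "content": (
            "You grade an answer to a question using only the text between "
            "[BEGINTEXT] and [ENDTEXT]. Reply with a single number from 0 to 100, "
            "and nothing else. If the answer is not supported by the text, reply 0."  # point 1
        )},
        # point 2: context, question, and answer combined into one message
        {"role": "user", "content": (
            f"[BEGINTEXT]{context}[ENDTEXT]\n"
            "Question: What is the consequence of a student got caught cheating in an exam?\n"
            "Answer: 100 dollars fine"
        )},
    ],
)
print(response.choices[0].message.content)  # expect a low grade for this wrong answer
```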

I already tried points 2 and 3 before posting to this community :sweat_smile:
I even tried giving it a correct and a wrong example answer, but GPT-3.5 still somehow gives a wrong answer to this simple query :thinking:
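
The correct/wrong few-shot variant I tried looked roughly like this (a sketch; the example context is invented):

```python
few_shot = [
    # correct answer -> high grade
    {"role": "user", "content": (
        "[BEGINTEXT]Library rules: late returns are fined USD 1 per day.[ENDTEXT]\n"
        "Question: What happens if a book is returned late?\n"
        "Answer: A fine of one dollar per day"
    )},
    {"role": "assistant", "content": "100"},
    # wrong answer -> 0
    {"role": "user", "content": (
        "[BEGINTEXT]Library rules: late returns are fined USD 1 per day.[ENDTEXT]\n"
        "Question: What happens if a book is returned late?\n"
        "Answer: The book is confiscated"
    )},
    {"role": "assistant", "content": "0"},
]
# ...followed by the real combined context/question/answer message,
# with temperature=0 -- and gpt-3.5-turbo still grades it wrong.
```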