I wrote a function that asks GPT to answer some questions about a document. Here is the function:
def ask_questions(questions, text):
    answers = []
    for question in questions:
        prompt = f'''
        review content of {extracted_text} and answer questions in {questions_gm} and provide answers to each question only in boolean values (Yes and No).
        Do not return the questions in the answer. Answer should be "Yes" or "No".
        Don't justify your answers. Don't give information not mentioned in the {extracted_text}.
        '''
        # get answers from GPT
        response = client.chat.completions.create(
            model="gpt-4-0125-preview",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        answer = response.choices[0].message.content
        print(answer)
        print(type(answer))
        if answer.endswith("Yes") or answer.endswith("yes"):
            is_true = 1
        else:
            is_true = 0
        answers.append(is_true)
        print(answers)
    return answers
When I call it, the output shows how the answers are produced:
No
No
No
<class 'str'>
[0]
No
No
No
<class 'str'>
[0, 0]
No
No
Yes
<class 'str'>
[0, 0, 1]
I was expecting it to answer the questions one by one, so I am curious why it assigned "No" to all three questions and returned three "No"s on the very first question. (There are three questions in questions_gm; the answers to the first two are "No" and the third is "Yes".) Is it possible to get it to answer one question at a time? The expected output would be:
No
<class 'str'>
[0]
No
No
<class 'str'>
[0, 0]
No
No
Yes
<class 'str'>
[0, 0, 1]
This is not to say that the answers may change, which is an entirely different issue; I would also appreciate advice on how to keep the answers consistent.
questions_gm = [
    "Does individual/group projects have opportunities for feedback for improvements?",
    "Is there frequent knowledge checks to assess learning beyond midterm and final exam and final projects?",
    "Is there any introspective learning opportunities, such as short writing assignments or reflection beyond exams?"
]
Do you think the print statements are messing this up? I print those variables to help me understand what the code is doing. I am not sure whether I can alter the prompt to get the desired answer.
The immediate cause is in your f-string: it interpolates {questions_gm} (the whole list) and {extracted_text} (a global) instead of the loop variable question and the text parameter, so every API call sends all three questions and the model answers all of them each time. Beyond that, you are mixing the data and the instructions together, which makes the task hard for the model. You are also making the AI write a series of bare Yes/No responses with no surrounding context, and it will get lost quickly.
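Before restructuring the prompt, the loop itself is worth fixing: interpolate the single question and the text parameter, one call per question. A minimal sketch, with the model call injected as a `complete` callable (a hypothetical stand-in for your `client.chat.completions.create(...)` wiring, so the loop logic can be shown on its own):

```python
def build_prompt(question, text):
    # One question per prompt: use the single `question` and the `text`
    # parameter, not the whole questions_gm list or a global.
    return (
        'Review the content below and answer the question with only '
        '"Yes" or "No". Do not justify your answer or use information '
        'not mentioned in the content.\n\n'
        f'Content:\n{text}\n\n'
        f'Question: {question}'
    )

def ask_questions(questions, text, complete):
    # `complete` takes a prompt string and returns the model's reply,
    # e.g. wrapping client.chat.completions.create(...) in your code.
    answers = []
    for question in questions:
        answer = complete(build_prompt(question, text)).strip()
        answers.append(1 if answer.lower().endswith("yes") else 0)
    return answers
```

With this shape, each call sees exactly one question, so each iteration prints exactly one "Yes" or "No".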
Much better format:
System:
You are a data processor AI that answers questions with "Yes" or "No" as the only accepted answers (starting in uppercase as shown). A careful decision must be made, as no other output is accepted.
Input questions will be in a JSON container. Output answers will also be in a similar mandatory JSON container. Example:
Input:
{"question 1": "Is the sky green?", "question 2": …}
Output:
{"answer 1": "No", …}
user:
/## Documentation context:
{documents}
/## Questions about documentation
{
“question 1”: “Does individual/group projects have opportunities for feedback for improvements?”,
“question 2”: “Is there frequent knowledge checks to assess learning beyond midterm and final exam and final projects?”,…}
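Wired up in Python, that format might look like the sketch below. `build_messages` and `parse_answers` are hypothetical helper names; the returned messages list would go straight into `client.chat.completions.create(...)`, and with the 0125 preview models you may additionally be able to pass `response_format={"type": "json_object"}` to force valid JSON output:

```python
import json

def build_messages(documents, questions):
    # Separate the instructions (system) from the data (user).
    system = (
        'You are a data processor AI that answers questions with only '
        '"Yes" or "No". Input questions are in a JSON object; output '
        'answers must be a JSON object with matching numbered keys, '
        'e.g. {"answer 1": "No", ...}.'
    )
    numbered = {f"question {i}": q for i, q in enumerate(questions, start=1)}
    user = (
        "## Documentation context:\n"
        f"{documents}\n\n"
        "## Questions about documentation\n"
        f"{json.dumps(numbered)}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def parse_answers(raw):
    # Convert {"answer 1": "No", ...} into the 0/1 list the original code built.
    data = json.loads(raw)
    keys = sorted(data, key=lambda k: int(k.split()[-1]))
    return [1 if data[k].strip().lower() == "yes" else 0 for k in keys]
```

Because the answers come back keyed to the questions, there is no ambiguity about which "No" belongs to which question, and one API call covers the whole batch.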