Greetings from Australia!
I am trying to build a GPT-3-powered debate moderator. One of its features is taking two seemingly opposing arguments and identifying the relationship between them. Below are the prompts I sent to davinci_instruct (temp=0, top_p=1); a sketch of how I'm calling the API follows.
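For reference, this is roughly what my calls look like. It's a minimal sketch assuming the legacy `openai` Python client and the `davinci-instruct-beta` engine name; the `prompt` variable stands in for the text shown in the screenshots.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # set directly here just for brevity

prompt = "..."  # the debate prompt from the screenshots goes here

response = openai.Completion.create(
    engine="davinci-instruct-beta",  # assuming this engine name; adjust to yours
    prompt=prompt,
    temperature=0,  # settings from above: deterministic sampling
    top_p=1,
    max_tokens=64,  # assumption: enough room for a short synthesized answer
)
print(response.choices[0].text.strip())
```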
The screenshot on the left shows the desired result: GPT took the two seemingly opposing statements and synthesized a new idea that established a causal link between them, namely that dogs want to protect because of their loyalty. Awesome!
The screenshot on the right uses the exact same prompt, just phrased less "politely". I reproduced both versions a few times with the same results: saying "please" yields the desired answer.
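To be concrete, the contrast is roughly the following (paraphrased; my exact wording is in the screenshots):

```python
# Paraphrased versions of the two prompt endings; only the politeness differs.
polite = "Please identify the relationship between statement A and statement B."
blunt = "Identify the relationship between statement A and statement B."
```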
I then tried formatting each argument as one claim plus three warrants (reasons backing the claim) and asked essentially the same question as in the first screenshot; the layout looked roughly like the sketch below.
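This is a hypothetical reconstruction of the structure I used; my real claims and warrants differ:

```python
# Hypothetical claim + warrants layout; the actual statements differ.
prompt = """\
Argument A:
Claim: ...
Warrant 1: ...
Warrant 2: ...
Warrant 3: ...

Argument B:
Claim: ...
Warrant 1: ...
Warrant 2: ...
Warrant 3: ...

Please identify the relationship between Argument A and Argument B."""
```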
I didn't get the answer I wanted; perhaps the prompt made things more confusing, though the answer is still technically correct in this context. So I tried again with basically the same prompt as the first time, only with different IDs for the statements.
This is essentially the same prompt as the first one, and I was even "polite", yet I didn't get the same dialectical synthesis as in the first prompt…
Can anyone weigh in on this? Any help is appreciated!