On the logical reasoning ability of GPT-4

Yes, GPT-4 confidently chooses the wrong option in the final part of its conclusion, even though it arrives at the correct answer during its reasoning process!

It almost seems stubborn, brushing aside the user's point that it is wrong.

Incidentally, Gemini Ultra also repeatedly chose the same wrong option.

Google and OpenAI surely do not share training data or parameters, so it is very interesting that completely different language models pick the same wrong option only at the final conclusion.