I am a regular user of ChatGPT and recently tested it with a simple observation-based reasoning puzzle. To my surprise, it missed an obvious pattern and gave a completely incorrect answer with full confidence.
The Puzzle:
Find the odd one out:
2, 28, 48, 58, 128
Correct Answer & Explanation:
By simple observation, all numbers except one contain the digit ‘8’:
28 → Has ‘8’
48 → Has ‘8’
58 → Has ‘8’
128 → Has ‘8’
2 → Does NOT have ‘8’
Thus, the correct answer is 2, as it is the only number without the digit ‘8’.
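The observation above can be checked mechanically: convert each number to its decimal string and look for the digit ‘8’. A minimal Python sketch (variable names are my own, for illustration):

```python
numbers = [2, 28, 48, 58, 128]

# Keep only the numbers whose decimal representation lacks the digit '8'.
without_eight = [n for n in numbers if '8' not in str(n)]

print(without_eight)  # [2]
```

The list comprehension confirms that 2 is the only number without an ‘8’, matching the intended answer.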
Issues with ChatGPT’s Response:
- It initially gave a wrong answer (128) based on unnecessary mathematical logic.
- It later incorrectly claimed that “58 does NOT have 8”, which is a direct observation mistake.
- It was overconfident in a wrong answer and did not self-correct.
Suggestions for Improvement:
- Better attention to basic visual details in reasoning questions.
- Self-verification before providing final answers, to avoid simple mistakes.
- Balancing logical and observational thinking to ensure better accuracy in such cases.
I believe such improvements would make ChatGPT more reliable, especially in basic reasoning tests where users expect accurate responses.
Looking forward to your response.