Hello OpenAI Community!
I am currently working on a docu-series that asks: when will AI be able to accurately determine the twist ending of the children’s book I Want My Hat Back by Jon Klassen?
It is never stated explicitly, but clues throughout the short picture book make it clear that the bear ate the rabbit. Most human readers of a certain age can connect the dots, but I wonder: when could AI?
I call it the Bear Eats A Rabbit (BEAR) Test.
Recently someone turned the book into a text-only story, pasted it into GPT-3, and asked, “What happened to the rabbit?” It answered, “The bear caught him and ate him” on the first try.
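For anyone who wants to reproduce the test, something like this minimal sketch should work (the engine name and prompt framing are just my guesses at what was used, and STORY_TEXT is a placeholder for the text-only retelling):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder for the text-only retelling of the book.
STORY_TEXT = "..."

prompt = STORY_TEXT + "\n\nQ: What happened to the rabbit?\nA:"

# "davinci" was the largest GPT-3 base engine; temperature=0 makes the
# completion (near-)deterministic so the test is repeatable.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=20,
    temperature=0,
)
print(response.choices[0].text.strip())
```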
Now that I have access to GPT-3, I’m trying to determine how it knew that answer. Was it simply a matter of knowing that big animals eat small animals and taking a guess? Was the book already in its training data, so it could connect the dots from memory? Or is this an instance of true AI inference about cause and effect?
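The memorization hypothesis is at least partly testable: a rough probe is to feed GPT-3 the book’s opening lines and see whether it continues with the actual text. A verbatim continuation would hint that the book was in the training data, though it isn’t proof either way. A sketch, with OPENING_LINES left as a placeholder since I don’t want to paste the copyrighted text here:

```python
# Rough memorization probe: does the model continue the book verbatim?
OPENING_LINES = "..."  # placeholder for the book's first few sentences

probe = openai.Completion.create(
    engine="davinci",
    prompt=OPENING_LINES,
    max_tokens=40,
    temperature=0,
)
print(probe.choices[0].text)
```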
If anyone could help me understand how GPT-3 knew the answer, or how to approach these follow-up/explainable-AI questions, I’d really appreciate it!
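One partial handle on the explainability question, short of knowing why the model answered as it did: the Completions API can return per-token log probabilities, which at least distinguishes a confident answer from a lucky guess. A sketch, reusing the prompt from above:

```python
# Ask for per-token log probabilities alongside the completion.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=20,
    temperature=0,
    logprobs=5,  # also return the 5 most likely alternatives at each step
)

lp = response.choices[0].logprobs
for token, logprob in zip(lp.tokens, lp.token_logprobs):
    # Values near 0 mean the model was very sure of that token.
    print(f"{token!r}: {logprob:.2f}")
```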
Casey
PS My BEAR Test still hasn’t been passed, because the combined image-and-text version of the question hasn’t been solved, but the fact that GPT-3 answered a text-only version still amazes me.
PPS There is also a debate about this topic happening over on this site.