Zero-shot perfection with the prompt “Let’s think step by step”

An interesting discovery by researchers at the University of Tokyo (pointed out in a tweet by Aran Komatsuzaki) in their paper Large Language Models are Zero-Shot Reasoners:

Adding the phrase “Let’s think step by step” before the model’s answer dramatically increases the accuracy of GPT-3’s responses.

I tried it out with the obvious prompt “What is the meaning of life?” and got a very elaborate - and wise - answer!

Read more: Prompt Design (GPT-3): “Step by Step” | by Merzmensch | Merzazine | Jun, 2022 | Medium
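
For anyone who wants to try it from the API rather than the Playground, here is a minimal sketch using the pre-1.0 openai Python library; the model name, Q/A framing, and sampling settings are just my own choices, not anything prescribed by the paper:

```python
import openai  # pre-1.0 openai SDK, the version in use when this thread was written

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "What is the meaning of life?"

# The whole trick: put the magic phrase where the answer would normally start,
# so the model writes out its reasoning before it concludes anything.
prompt = f"Q: {question}\nA: Let's think step by step."

response = openai.Completion.create(
    model="text-davinci-002",  # assumed; any GPT-3 completion model should work
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["text"])
```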


That’s a good idea and it works well, thanks for that.

This is a super interesting trick. It even seems to work for stranger word problems, like the following:

But it didn’t get the correct answer right away. It took some rewording of the question (the word “total” was key), and even then it gave a few incorrect answers (4.5, 8) before reaching the right conclusion. It once concluded 4.5 and only then showed the correct math, so perhaps it would be more reliable if we could somehow persuade the model to complete all of its steps before committing to a final answer.
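
The paper actually handles exactly this by splitting the request in two: first let the model write out all of its reasoning, then make a second call that appends an answer cue and reads off only that. A rough sketch of that two-stage idea (the model choice, token limits, and extraction phrasing are my own assumptions):

```python
import openai  # pre-1.0 openai SDK

openai.api_key = "YOUR_API_KEY"  # placeholder
MODEL = "text-davinci-002"       # assumed

def solve_step_by_step(question: str) -> str:
    # Stage 1: let the model finish *all* of its reasoning, with no pressure
    # to commit to a number yet.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = openai.Completion.create(
        model=MODEL,
        prompt=reasoning_prompt,
        max_tokens=256,
        temperature=0,
    )["choices"][0]["text"]

    # Stage 2: append the reasoning and only now ask for the final answer.
    extraction_prompt = reasoning_prompt + reasoning + "\nTherefore, the answer is"
    answer = openai.Completion.create(
        model=MODEL,
        prompt=extraction_prompt,
        max_tokens=16,
        temperature=0,
    )["choices"][0]["text"]
    return answer.strip()

print(solve_step_by_step("If I have 3 apples and buy 5 more, what is the total number of apples?"))
```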


Thank you, this saves me a lot of tokens. What I was doing before was following the completion with another prompt asking it to explain things in a simpler way.

Love this trick. Got a very nice response using it. I’ve certainly read much worse “relationship advice” from supposed human experts.


What should a shy teenager do, if they want to ask a classmate out on a date, but are afraid of rejection?
Let’s think step by step.

First, the teenager should consider whether or not they are actually interested in this classmate. If they are just asking for the sake of asking, or because they feel like they “should” ask someone out, that’s not a great foundation for a relationship. But if they have genuine feelings for this person and think there could be potential for a relationship, then it’s worth taking the next step.

The teenager should then spend some time thinking about what they want to say to their classmate. They might want to write it down or practice saying it out loud to themselves beforehand, so that they feel more confident when the time comes.

When they’re ready, the teenager should approach their classmate and say something like, “Hey, I was wondering if you’d want to go on a date with me sometime.” It’s important to be direct and honest - beating around the bush will only make things more awkward. The worst that can happen is that their classmate says no; in that case, the teenager can simply move on and focus on finding someone who does want to date them.


Thanks. Adding this to my toolset of console commands.

Have you had any luck generating poetry using this method?

I tried to write a poem,
But all my words were wrong.

I asked GPT-3 for help,
But it just spat out gibberish.

I gave it a title and components,
And I still couldn’t make it work.

In the end, all I had was this failed poem.


I love this concept. Thinking of a prompt as working memory is really interesting. I know the symbolists complain that GPT-3 doesn’t really understand what it is writing because it doesn’t build mental models, so it can’t grasp the relationships, dependencies, constraints, etc. of the sentences it generates.

So I wonder if GPT-3 could build a “mental” model using this step by step “reasoning” and then refer to it while rewriting a statement with a more plausible construct.
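
If anyone wants to experiment with that, one crude way to fake the “mental model” is a two-pass prompt: first ask for the step-by-step analysis, then keep that analysis in the context window and ask for a rewrite that stays consistent with it. A sketch of the idea (purely speculative, nothing from the paper; the model name and phrasing are my assumptions):

```python
import openai  # pre-1.0 openai SDK

openai.api_key = "YOUR_API_KEY"  # placeholder
MODEL = "text-davinci-002"       # assumed

def rewrite_with_reasoning(statement: str) -> str:
    # Pass 1: ask for the step-by-step analysis -- the stand-in "mental model"
    # of the relationships and constraints in the statement.
    analysis_prompt = (
        f"Statement: {statement}\n"
        "Let's think step by step about what this statement assumes and implies."
    )
    analysis = openai.Completion.create(
        model=MODEL,
        prompt=analysis_prompt,
        max_tokens=200,
        temperature=0,
    )["choices"][0]["text"]

    # Pass 2: feed that analysis back in and ask for a rewrite that respects it.
    rewrite_prompt = (
        f"Statement: {statement}\n"
        f"Analysis:{analysis}\n\n"
        "Rewrite the statement so that it is consistent with the analysis above:"
    )
    rewrite = openai.Completion.create(
        model=MODEL,
        prompt=rewrite_prompt,
        max_tokens=200,
        temperature=0.7,
    )["choices"][0]["text"]
    return rewrite.strip()
```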


I wrote a poem and got it corrected. Now I can’t figure out how I did it.

This is an interesting idea!
