But it didn’t get the correct answer right away. It took some rewording of the question (the word “total” was key), and even then it gave a few incorrect answers (4.5, 8) before reaching the right conclusion. Once it concluded 4.5 and only then showed the correct arithmetic afterwards, so perhaps it would be more reliable if we could somehow persuade the model to complete all of its steps before committing to a final answer.
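One way to test that idea is to bake the “all steps before the answer” instruction into the prompt itself. Here is a minimal sketch, assuming the pre-1.0 OpenAI Python SDK (`openai.Completion.create`); the arithmetic question and the “Final answer:” convention are my own illustration, not the exact prompt from the post:

```python
import openai  # pre-1.0 SDK; assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt: ask for every step *first*, and reserve a labelled
# slot for the conclusion so the model can't jump to an answer early.
prompt = (
    "Q: If I have 3 apples and buy 2 more bags of 4 apples each, "
    "how many apples do I have in total?\n"
    "Work through every step first, then give the result on a line "
    "starting with 'Final answer:'.\n"
    "Let's think step by step.\n"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # GPT-3-era engine name; adjust as needed
    prompt=prompt,
    max_tokens=256,
    temperature=0,  # deterministic sampling makes arithmetic less flaky
)
print(response.choices[0].text)
```

Whether this actually prevents the “answer first, math afterwards” failure would need testing, but it at least makes the intended ordering explicit to the model.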
Thank you, this saves me a lot of tokens. What I was doing before was following the completion with a second prompt asking it to explain in a simpler way.
Love this trick. Got a very nice response using it. I’ve certainly read much worse “relationship advice” from supposed human experts.
What should a shy teenager do, if they want to ask a classmate out on a date, but are afraid of rejection?
Let’s think step by step.
First, the teenager should consider whether or not they are actually interested in this classmate. If they are just asking for the sake of asking, or because they feel like they “should” ask someone out, that’s not a great foundation for a relationship. But if they have genuine feelings for this person and think there could be potential for a relationship, then it’s worth taking the next step.
The teenager should then spend some time thinking about what they want to say to their classmate. They might want to write it down or practice saying it out loud to themselves beforehand, so that they feel more confident when the time comes.
When they’re ready, the teenager should approach their classmate and say something like, “Hey, I was wondering if you’d want to go on a date with me sometime.” It’s important to be direct and honest - beating around the bush will only make things more awkward. The worst that can happen is that their classmate says no; in that case, the teenager can simply move on and focus on finding someone who does want to date them.
I love this concept. Thinking of a prompt as working memory is really interesting. I know the symbolists complain that GPT-3 does not really understand what it is writing because it doesn’t build mental models with which to understand the relationships, dependencies, constraints, etc. of the sentences it generates.
So I wonder if GPT-3 could build a “mental” model using this step-by-step “reasoning” and then refer back to it while rewriting a statement with a more plausible construct.
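A rough sketch of what that two-pass approach might look like, again assuming the pre-1.0 OpenAI Python SDK; the prompts, the example sentence, and the `complete` helper are all hypothetical:

```python
import openai  # pre-1.0 SDK; assumes OPENAI_API_KEY is set

def complete(prompt: str) -> str:
    """Single completion call; engine name is a GPT-3-era assumption."""
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=256,
        temperature=0,
    )
    return response.choices[0].text.strip()

statement = "The committee have decided that each members bring their own lunch."

# Pass 1: have the model spell out the entities, relationships, and
# constraints in the sentence -- a scratchpad standing in for a "mental model".
analysis = complete(
    f"List the entities, relationships, and grammatical constraints in this "
    f"sentence, step by step:\n{statement}\n\nLet's think step by step.\n"
)

# Pass 2: feed that scratchpad back in as working memory and ask for a
# rewrite that respects it, so the model "refers to" its own analysis.
rewrite = complete(
    f"Sentence: {statement}\n"
    f"Analysis of the sentence:\n{analysis}\n\n"
    f"Using the analysis above, rewrite the sentence so it is "
    f"grammatical and internally consistent:\n"
)
print(rewrite)
```

The scratchpad here is just text fed back into the context window, of course, not a persistent model, but it is the closest analogue to “refer to it while rewriting” that the API makes available.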