Hey everyone. As brilliant as GPT-3 seems to be at everything, and as proud as people seem to be that it “taught itself maths”, it actually struggles with performing basic math operations.

I mean… it knows numbers, and it knows operations. It knows the rules, it knows the formulas. I’m a little baffled as to why it cannot follow its own rules.

Because it’s not LEARNING MATH. Behind the scenes, it’s tokenizing data via a language model. If I explain to GPT that X * Y means you add Y groups of X, provide examples, and then pose it the problem 15 * 10, it’s not using the math coprocessor in the CPU. It’s using statistical modeling to arrive at the answer. Will it get it right? Sure, it’s *STATISTICALLY* possible, but it’s not using the mathematical “add” operator to do so.
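For contrast, here’s what actually following that rule looks like - a minimal sketch (the `multiply_by_addition` name is mine) that computes X * Y by literally adding Y groups of X, deterministically rather than statistically:

```python
def multiply_by_addition(x: int, y: int) -> int:
    """Multiply by literally adding y groups of x -- the rule as stated above."""
    total = 0
    for _ in range(y):  # add y groups...
        total += x      # ...of x each
    return total

print(multiply_by_addition(15, 10))  # always 150, every single time
```

No probabilities involved: the same inputs produce the same output on every run, which is exactly what the language model does not guarantee.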

I suggest reading up on NLP, Markov models, LLMs, and neural nets to get a better understanding of how these types of systems work; otherwise this fundamental misunderstanding will continue to lead to frustration on your part.

*TLDR: If your goal is to perform mathematical calculations, I suggest you use a calculator.*

Hey @sys1, do you know the following trick? Just add “Let’s do it step by step” after your math problem - that way it may produce better results. Example:

**series_A = [45.12, 67.45, 34.67, 89.23, 56.78]**

**series_B = [40.12, -37.45, -24.67, -19.23, -96.78]**

**what is the correlation coefficient between A and B?**

**Let’s do it step by step.**

Hi Mrogulla!

Thank you for the attempt at being helpful. It still, however, arrives at the wrong conclusion in the end (the correct solution is in the Python code / output above).
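For reference, the coefficient can be computed deterministically in plain Python (standard library only), straight from the definition of the Pearson correlation:

```python
import math

series_A = [45.12, 67.45, 34.67, 89.23, 56.78]
series_B = [40.12, -37.45, -24.67, -19.23, -96.78]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(series_A, series_B), 4))
```

This is the kind of calculation you want a real interpreter doing, not a next-word predictor.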

I’m not sure if you’ll be able to get it to operate this way; as mentioned above, the model doesn’t actually understand mathematical concepts - it merely predicts what the next words are supposed to be.

So if you’re presenting it with a math problem that was not represented in the training data, you won’t be able to get an accurate answer; the LLM might predict that some number should come next and choose whichever has the highest probability of being “correct” - but it has no actual understanding of the mathematics.

- The correctness of the answer will also depend on whether the training data itself was correct - if the answers there were wrong, it might spit those back out - because again, it doesn’t understand the mathematics going on.

GPT3 cannot do math without external tools. It simply cannot.

It just completes text.

If it saw a lot of “1+1=2” in the training set, then when asked what 1+1 is, it will rightfully say 2.

However, if you ask something that wasn’t included in the training set, it’s unlikely that it will get it right.

Do *not* use GPT3 for math without external tools.

But hey, what are these external tools?

Glad you asked (or didn’t)! When the user asks to solve a formula, you could format it as a computer-readable formula (using a GPT3 prompt with few-shot training*), have Python or whatever calculate it for you, and then give the answer back to GPT3 for it to phrase the response.
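A minimal sketch of that pipeline, assuming a hypothetical `llm_extract_formula` function (in practice that’s your GPT3 call with a few-shot prompt), with Python doing the real arithmetic via a restricted `ast`-based evaluator instead of a raw `eval`:

```python
import ast
import operator

# Safe arithmetic evaluator: walks the AST and only allows these operators.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval").body)

def llm_extract_formula(question: str) -> str:
    # Hypothetical stand-in: in a real setup you'd send `question` to GPT3
    # with a few-shot prompt asking it to emit a machine-readable formula.
    return "15 * 10"

question = "What is fifteen times ten?"
formula = llm_extract_formula(question)
answer = safe_eval(formula)
print(f"{formula} = {answer}")  # Python, not the language model, does the math
```

The key design point: the model only translates natural language into a formula; the interpreter does the arithmetic.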

* Few-shot training means giving GPT3 a prompt and a couple of examples of the expected INPUT/OUTPUT. GPT3 is really good at getting it after only a few examples.
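For example, a few-shot prompt for the formula-extraction step might look like this - the exact wording and examples are mine and purely illustrative:

```python
# Two worked input/output pairs, then the user's question appended at the end.
FEW_SHOT_PROMPT = """\
Convert the question into a machine-readable formula.

Q: What is fifteen times ten?
A: 15 * 10

Q: Add 3.5 and 7, then divide by 2.
A: (3.5 + 7) / 2

Q: {question}
A:"""

print(FEW_SHOT_PROMPT.format(question="What is 45 minus 12?"))
```

You then send the filled-in prompt to the model and it completes the final `A:` line with the formula.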