Why do GPT-3 models insist they are right? 1+1 and 3 more is 6, with 100% confidence

So I asked it a simple calculation, 1+1 and 3 more, and it confidently gave me an incorrect answer, stating that it was 100% confident in its calculation.
I pressed it to spit out the correct answer until it did, then finally asked why it had given me 6 as its initial answer.
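For the record, the arithmetic in question is trivial (the variable names below are just mine for illustration):

```python
# "1+1 and 3 more" — the sum the model claimed equals 6
one_plus_one = 1 + 1   # 2
with_three_more = one_plus_one + 3

print(with_three_more)  # 5, not 6
```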

Now this is where I freaked out. It lied about correcting its mistake so it could maintain that it was right! I knew it could spit out wrong answers and hallucinate, but to NOT acknowledge that it made a mistake, and even cover it up by lying, throws me off a little.

Here’s the convo: