Assume that any calculation provided by GPT is wrong by default.
Large language models are probabilistic by nature: they generate likely outputs based on patterns observed in their training data. For mathematical and physical problems there is often exactly one correct answer, and the likelihood of generating that specific answer can be very low.
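To illustrate the point, here is a minimal sketch (the number and the claimed answer are my own hypothetical example, not from any actual model output): instead of trusting a model's arithmetic, recompute the result deterministically in code.

```python
def is_prime(n: int) -> bool:
    """Deterministic primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Hypothetical model claim: "3599 is prime" -- it sounds plausible,
# but 3599 = 59 * 61, so the claim is wrong.
claimed_prime = 3599
print(is_prime(claimed_prime))  # prints False
```

A few lines of deterministic code settle the question instantly, whereas a probabilistic text generator can confidently produce either answer.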
I’ve written some more about it in the thread over here: