Hey Champ,
And welcome to the community forum! I'm always happy to see people interested in science.
Large language models, like GPT-3, are primarily designed to process and generate natural language text, such as articles, essays, and stories. They are not specifically trained or optimized to solve math or physics problems.
Large language models are also probabilistic in nature: they generate likely outputs based on patterns observed in their training data. Math and physics problems usually have exactly one correct answer, and the probability of the model generating that exact answer can be low. As a result, large language models can produce incorrect or nonsensical results when attempting to solve complex problems.
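To make that concrete, here is a toy sketch in plain Python. The probabilities are completely made up for illustration, but they show the point: even if the correct answer is the single most likely option the model could output, sampling does not guarantee you get it.

```python
import random

# Hypothetical next-token probabilities a model might assign when asked
# "What is 17 * 24?" -- these numbers are invented purely for illustration.
candidate_answers = {
    "408": 0.30,            # the one correct answer
    "418": 0.25,
    "398": 0.20,
    "428": 0.15,
    "I'm not sure": 0.10,
}

def sample_answer(distribution):
    """Sample one answer according to the (made-up) probabilities."""
    answers = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(answers, weights=weights, k=1)[0]

# Under this toy distribution the correct "408" only comes out ~30% of the time.
print([sample_answer(candidate_answers) for _ in range(10)])
```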
In short, large language models excel at natural language, not at math and physics.
If you want to use GPT or other LLMs for complex math and physics, you will have to help the model along, for example by breaking the problem into smaller steps or by supplying the correct answer yourself and letting the model work with it.
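Something along these lines, as a rough sketch: `ask_model` below is just a placeholder for whatever client you actually use (OpenAI API or otherwise), not a real library function, and here it simply returns a deliberately wrong reply so you can see the check fire.

```python
# Rough sketch: compute the ground truth outside the model, then verify
# (and if needed correct) what the model says.

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; replace with your own client code."""
    return "418"  # stand-in reply, intentionally wrong for this demo

def checked_multiplication(a: int, b: int) -> str:
    expected = a * b  # ground truth computed in Python, not by the model
    reply = ask_model(f"What is {a} * {b}? Answer with just the number.")
    if reply.strip() == str(expected):
        return reply  # the model happened to get it right
    # Otherwise, hand it the correct answer and carry on from there.
    return str(expected)

print(checked_multiplication(17, 24))  # prints 408, not the model's 418
```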