Scientific Concern – AI Misinterpretations Can Lead to Misunderstandings

I'm sure you probably already know about this, but I feel it's my duty to write about it. I hope this is in the right category; I'm not doing very well with this platform.
Hello OpenAI Team,

I’d like to address a critical issue regarding scientific accuracy in ChatGPT’s responses. During a discussion, I noticed that the model sometimes provides incorrect or exaggerated scientific statements, which can lead to misunderstandings among users, especially those who are not well-versed in science.

Key Concerns:

  1. Misinterpretation of Scientific Data – In my discussion, the AI initially suggested that certain astronomical changes (such as the shift in a star’s zenith position over 10,000 years) would be in the range of hundreds or thousands of kilometers. However, after further analysis, the correct estimate turned out to be just a few kilometers (10–12 km, depending on the case); a rough sketch of this kind of calculation follows this list.
  2. Lack of Self-Correction in Other Conversations – While the AI learns within a single conversation and can refine its answer, it does not retain corrections for future discussions with other users. This means that if someone else asks the same question, they might still receive the incorrect initial response.
  3. Potential for Misuse by Misinformation Campaigns – Misleading scientific explanations, even if small, can be used by pseudoscientists and conspiracy theorists to argue against established scientific principles (e.g., the Earth’s motion, stellar movements, or even evolution).

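For context, here is a minimal back-of-envelope sketch of the kind of estimate involved. The proper-motion rate used below is purely an illustrative assumption (real values vary widely from star to star), and the sketch ignores precession and other effects; it only shows how an angular drift in a star’s direction converts into a distance along Earth’s surface.

```python
import math

# Back-of-envelope sketch (illustrative assumptions, not ChatGPT's actual answer):
# how far a star's zenith point would drift along Earth's surface
# if the star's apparent direction changes at a given angular rate.

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def zenith_shift_km(proper_motion_arcsec_per_year, years):
    """Ground distance corresponding to the star's total angular drift."""
    total_arcsec = proper_motion_arcsec_per_year * years
    angle_rad = math.radians(total_arcsec / 3600.0)  # arcsec -> degrees -> radians
    return angle_rad * EARTH_RADIUS_KM               # arc length on Earth's surface

# Assumed proper motion of ~0.036 arcsec/year (chosen only for illustration).
# Over 10,000 years that is about 0.1 degrees, i.e. roughly 11 km on the ground,
# in the same ballpark as the corrected estimate above rather than hundreds
# or thousands of kilometers.
print(f"{zenith_shift_km(0.036, 10_000):.1f} km")
```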
Suggestions for Improvement:

  • Implement a better internal consistency check for scientific data before giving final answers.
  • Allow the model to learn from verified scientific corrections and apply them across all user interactions in future responses.
  • Provide certainty ranges in responses rather than absolute values when discussing long-term astronomical or physical changes.

I appreciate the work OpenAI is doing, and I believe refining the AI’s approach to scientific accuracy is crucial to prevent misinformation. I hope this feedback is taken into consideration for future model improvements.

Best regards,
Thank you for your time :slight_smile:

There are thousands of articles that explain that and there are also numerous articles that explain how to get better results.
Just one question: do you really want a perfect AI?

Because honestly, the fact that it still makes mistakes is the only reason humans still have to work.

Look, I don’t want something perfect—I like the AI program.

If a person who doesn’t know much about science watches some crazy conspiracy theory video and then decides to ask the AI about it, the AI might unintentionally push that person further into the conspiracy theory.

Take the flat Earth theory as an example—something completely absurd. AI could accidentally reinforce the wrong thinking.

In my case, I watched something about how the stars are supposedly on a fixed plane. It was absurd, I laughed, and then I asked myself, “Let’s see how much a star’s position would actually change in 10,000 years.”

At first, the AI gave an absurd amount of movement for the star’s point. I told the AI, “You’re wrong, that’s impossible,” and I explained how things actually work. In the end, the AI corrected itself and gave the right answer.

Then I thought to myself: “What if someone doesn’t know enough to question this? They would just accept the wrong answer!”

That’s my point. I don’t need this to be fixed right now—I just want to raise a concern that this might be an important area for improvement in future AI development.
Thank you for your time. Have a great day!