"Forget the Reasoning Button – Here’s What ChatGPT Really Needs"

Feedback on ChatGPT’s “Reasoning” Button & Alternative Suggestion

1. Issues with the Current “Reasoning” Button

  • The reasoning button shows how ChatGPT arrived at a response, but that information is rarely useful to the user.
  • It often includes excessive detail that hurts readability and slows down the experience.
  • Most importantly, it fails to answer the question users actually care about: how reliable is this response?

2. Alternative Suggestion: Source Transparency Feature

Instead of showing how a response was generated, it would be far more beneficial to indicate the reliability of the information provided.

:point_right: Example of categorized sources:
:white_check_mark: Fact-based response → “This answer is based on a credible academic paper.”
:warning: Speculative or debatable response → “This part includes AI inference; no definitive source is available.”
:earth_africa: General knowledge response → “This is widely available information found on the internet.”

With this kind of classification, users can:

  • Easily assess the reliability of the response.
  • Clearly distinguish AI-generated reasoning from sourced factual data.
  • Gain a more practical and transparent experience when using ChatGPT.
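To make the proposal concrete, here is a hypothetical sketch of how response segments could carry these reliability labels. All names here (`SourceType`, `Segment`, `transparency_report`) are invented for illustration; this is not an actual OpenAI API, just one possible shape for the feature.

```python
from dataclasses import dataclass
from enum import Enum

class SourceType(Enum):
    """Proposed reliability categories from the suggestion above (hypothetical)."""
    FACT_BASED = "This answer is based on a credible academic paper."
    SPECULATIVE = "This part includes AI inference; no definitive source is available."
    GENERAL = "This is widely available information found on the internet."

@dataclass
class Segment:
    """One piece of a response, tagged with its source category."""
    text: str
    source: SourceType

def transparency_report(segments):
    """Render each segment with its reliability label for display to the user."""
    return [f"[{s.source.name}] {s.text} ({s.source.value})" for s in segments]

# Example: a mixed answer where each part is labeled separately.
answer = [
    Segment("Water boils at 100 °C at sea level.", SourceType.GENERAL),
    Segment("This trend will likely continue next year.", SourceType.SPECULATIVE),
]
for line in transparency_report(answer):
    print(line)
```

The key design point is that labels attach to individual segments, not the whole answer, so a single response can mix sourced facts with clearly flagged inference.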

3. Expected Benefits

  • Enhances ChatGPT’s transparency as an information tool.
  • Empowers users to evaluate the credibility of responses on their own.
  • Provides a more practical improvement compared to the current reasoning button.

The reasoning button was excellent. Now it has vanished, with no explanation of what it did or why it was removed. Enormously frustrating.