Feedback on ChatGPT’s “Reasoning” Button & Alternative Suggestion
1. Issues with the Current “Reasoning” Button
- The reasoning button reveals how ChatGPT generates its responses, but this process-level detail is rarely information users can act on.
- It often includes excessive detail that reduces readability and slows down the user experience.
- More importantly, it fails to address the key concern: How reliable is this response?
2. Alternative Suggestion: Source Transparency Feature
Instead of showing how a response was generated, it would be far more beneficial to indicate the reliability of the information provided.
Example of categorized sources:
- Fact-based response → “This answer is based on a credible academic paper.”
- Speculative or debatable response → “This part includes AI inference; no definitive source is available.”
- General knowledge response → “This is widely available information found on the internet.”
With this kind of classification, users can:
- Easily assess the reliability of the response.
- Clearly distinguish AI-generated reasoning from sourced factual data.
- Gain a more practical and transparent experience when using ChatGPT.
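The classification described above could be represented as a simple mapping from source category to a user-facing label. This is a minimal sketch in Python; the enum, category names, and label wording are hypothetical illustrations of the proposal, not an existing API.

```python
from enum import Enum


class SourceType(Enum):
    # Hypothetical categories mirroring the proposal above
    FACT_BASED = "fact_based"      # backed by a credible source
    SPECULATIVE = "speculative"    # AI inference, no definitive source
    GENERAL = "general"            # widely available common knowledge


# User-facing transparency labels (hypothetical wording from the examples above)
LABELS = {
    SourceType.FACT_BASED: "This answer is based on a credible academic paper.",
    SourceType.SPECULATIVE: "This part includes AI inference; no definitive source is available.",
    SourceType.GENERAL: "This is widely available information found on the internet.",
}


def transparency_label(source_type: SourceType) -> str:
    """Return the reliability label shown alongside a response segment."""
    return LABELS[source_type]
```

A response could then be rendered segment by segment, each tagged with one of these labels, so readers see at a glance which parts are sourced and which are inferred.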
3. Expected Benefits
- Enhances ChatGPT’s transparency as an information tool.
- Empowers users to evaluate the credibility of responses on their own.
- Provides a more practical improvement compared to the current reasoning button.