[Proposal] If AI is to become a real industry, trust must come before capability
I am an individual user, but if I were a business decision-maker evaluating ChatGPT for professional use, under the current design I would likely cancel the contract within the first month, and possibly request a refund.
The reason is simple:
I cannot tell what the AI is actually referencing.
Today, AI is already:
- fast
- capable
- multifunctional
However, in business contexts, the most important requirement is not intelligence but trust:
specifically, the ability to explain decisions and assign responsibility for them.
Regarding external file integrations such as Google Drive, there is a major gap:
even when an integration is active, users cannot clearly see
- whether any files were referenced at all
- which files were referenced
This makes review, auditing, and accountability extremely difficult.
My proposal is intentionally minimal.
At the very beginning of each response, simply display a short declaration such as:
“Referenced files:
- File A (last updated: YYYY-MM-DD)
- File B (last updated: YYYY-MM-DD)”
This single line would immediately allow users to:
- verify assumptions
- detect reference errors
- review outputs responsibly
- treat the AI as a tool rather than a gamble
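To make the proposal concrete, here is a minimal sketch of how such a declaration could be generated. The `ReferencedFile` type and `reference_header` function are hypothetical names invented for illustration; the actual metadata available from an integration would depend on the platform.

```python
from dataclasses import dataclass

@dataclass
class ReferencedFile:
    """A file the assistant consulted while composing a response (illustrative)."""
    name: str
    last_updated: str  # ISO date string, e.g. "2024-05-01"

def reference_header(files: list[ReferencedFile]) -> str:
    """Build the declaration shown at the top of a response.

    An explicit "(none)" line keeps the absence of references
    just as visible and auditable as their presence.
    """
    if not files:
        return "Referenced files: (none)"
    lines = ["Referenced files:"]
    lines += [f"- {f.name} (last updated: {f.last_updated})" for f in files]
    return "\n".join(lines)
```

The key design choice is that the header is emitted unconditionally: a response that referenced nothing says so explicitly, so silence can never be mistaken for a reference.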
Society already accepts that AI can make mistakes.
What matters is whether humans can detect and correct those mistakes.
Without visible references, users are forced to rely on trust alone.
With visible references, trust becomes structural.
Focusing only on short-term feature expansion risks quietly losing users.
Once trust is lost, most users do not return.
If AI is meant to grow beyond a temporary trend and become a true industry,
the first thing to cultivate should not be performance, but trust.
This change may seem minor, but it has the potential to create a large and lasting impact.