[CRITICAL] Do Not Use Quotation Marks for AI-Generated Text – It Erodes User Trust

Critical UX / Trust & Transparency

Summary:

ChatGPT sometimes places quotation marks around AI-generated or paraphrased statements, giving the false impression they are direct quotes from real people, articles, or historical sources. This is a serious integrity flaw. Most users assume that anything in quotation marks is verbatim and sourced. Using quotation marks on invented or inferred statements **creates a false sense of citation and can permanently damage user trust.**

Severity:

Critical – This isn’t about hallucination; it’s about misleading formatting. Trust in AI output depends not only on what is said but also on how it is presented. Fabricated quotation marks break the implied contract with the user.

Proposal:

Implement a hard constraint: Never place quotation marks around generated speech or paraphrases. Instead, use italics and a leading clause such as “Based on common sentiment online…”, “A synthesized perspective might sound like…”, or “An inferred tone could be…”. Offer a toggle in Settings for literal vs. expressive output if users want to allow creative dialogue or stylized responses.
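A minimal sketch of what this constraint could look like as an output post-processing step, assuming a Markdown-rendering client. Everything here is illustrative: `QUOTE_PATTERN`, `rewrite_synthesized_quote`, and the `allow_expressive` flag (standing in for the proposed Settings toggle) are hypothetical names, not part of any real ChatGPT pipeline. A real implementation would also need a model-side signal distinguishing genuine citations from synthesized speech, which a regex alone cannot provide.

```python
import re

# Hypothetical post-processing sketch for the proposed hard constraint.
# Assumes we already know (e.g., from a model-side tag) that the quoted
# span is synthesized rather than a genuine citation.

# Matches a span wrapped in straight or curly double quotes.
QUOTE_PATTERN = re.compile(r'[“"](?P<body>[^”"]+)[”"]')

# One of the hedging lead-ins suggested in the proposal above.
LEAD_IN = "Based on common sentiment online: "

def rewrite_synthesized_quote(text: str, allow_expressive: bool = False) -> str:
    """Replace quote-wrapped synthesized speech with italics plus a
    hedging lead-in. `allow_expressive` stands in for the proposed
    Settings toggle for literal vs. expressive output."""
    if allow_expressive:
        return text  # user has opted into stylized / creative dialogue

    def _rewrite(match: re.Match) -> str:
        # Markdown italics plus a lead-in, so generated speech can never
        # be mistaken for a verbatim, sourced quotation.
        return f"{LEAD_IN}*{match.group('body')}*"

    return QUOTE_PATTERN.sub(_rewrite, text)

if __name__ == "__main__":
    sample = "“We got depression and gig apps. They got cocaine and pensions.”"
    print(rewrite_synthesized_quote(sample))
    # Based on common sentiment online: *We got depression and gig apps. ...*
```

Note the design choice: the conservative rewrite is the default, and quotation marks survive only when the user has explicitly flipped the toggle.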

Example of the Issue:

When asked how Gen Z might compare their lives to those of Baby Boomers, ChatGPT replied: “We got depression and gig apps. They got cocaine and pensions.”

This was not a real quote (accurate in sentiment, perhaps, but invented), and it was wrapped in quotation marks. Most users would take that as a direct quote from a real post or article. That is highly misleading, even when the sentiment rings true. Trust dies in moments like these.

Impact:

- Users who detect the fabrication will begin doubting everything the model outputs.
- Users who don’t detect it may unknowingly repeat false quotations in public, damaging their own credibility.

Over time, this erodes long-term confidence in AI systems, especially among high-context, detail-sensitive users.