Custom GPTs Mislead Users About Document Storage – OpenAI Needs to Address This Transparency Issue

OpenAI’s Custom GPTs allow users to upload documents for analysis, but there is a critical lack of transparency about how these documents are handled.

The Problem

  • When users upload a document, they naturally assume it will be stored and remain available for future reference.
  • In reality, Custom GPTs do not retain full documents. Only structured insights explicitly extracted at the time of upload are stored.
  • If an insight was not extracted, it cannot be retrieved later, and users must re-upload the document.
  • OpenAI does not warn users of this limitation, leading to false expectations and potential data loss.

Why This Is a Serious Issue

  • Users rely on the system to store documents for future reference, but that assumption is incorrect.
  • This misleading design prioritizes ease of use over transparency.
  • OpenAI has not provided a clear, upfront warning that documents will not be available after the session ends.

What OpenAI Needs to Fix

  • Clearly disclose document handling policies before users upload files.
  • Provide a warning that documents are not stored and that only selected insights are retained.
  • Offer an option for users to export extracted insights for long-term reference (a rough self-serve workaround is sketched below this list).
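
Until something like that exists, one stopgap is to extract and save insights yourself before relying on a Custom GPT to remember a file. The sketch below is a minimal example using the standard OpenAI Python SDK outside of Custom GPTs; the model name, prompt, and file paths are placeholder assumptions, not anything OpenAI has documented about how Custom GPT uploads are handled.

```python
# Rough workaround sketch: extract insights from a document yourself and keep
# a local copy, instead of assuming a Custom GPT will retain the file.
# Assumes the standard OpenAI Python SDK and OPENAI_API_KEY in the environment.
# Model name, prompt, and paths are placeholders.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_and_store_insights(doc_path: str, out_path: str = "insights.json") -> str:
    """Send a plain-text document to the API and save the returned insights locally."""
    text = Path(doc_path).read_text(encoding="utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Extract the key facts, figures, and decisions from the document as a bullet list.",
            },
            {"role": "user", "content": text},
        ],
    )
    insights = response.choices[0].message.content

    # Keep a durable local copy so nothing depends on session retention.
    Path(out_path).write_text(
        json.dumps({"source": doc_path, "insights": insights}, indent=2),
        encoding="utf-8",
    )
    return insights


if __name__ == "__main__":
    print(extract_and_store_insights("report.txt"))
```

The point of the sketch is simply that the durable copy lives on your side, so nothing you need later depends on what a session happened to retain.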

This is a serious design flaw: it leads users to believe they can retrieve uploaded documents later, and OpenAI needs to communicate clearly and explicitly about how uploads are actually handled. Has anyone else run into this? OpenAI, can you clarify this design choice?