I’d like to bring up a critical feature request/bug: the ability to delete individual Chat Completions directly from the platform dashboard. Currently, once a Chat Completion is generated, it’s permanently stored and accessible for billing and monitoring purposes. This lack of deletion capability is a big red flag, especially from a data management and privacy perspective.
For developers and users alike, having control over stored data is essential, not just for compliance but also for managing sensitive or redundant information. The inability to remove these records can complicate data organization and raise security concerns.
I urge you to consider prioritizing this feature. Implementing an option to delete Chat Completions would greatly enhance user control and trust in the platform.
Hey there! Definitely an important discussion—data control and privacy are key. I just wanted to clarify because there might be some confusion here.
If you’re referring to chat history (conversations in ChatGPT’s UI), you can delete them permanently by clicking on the chat and selecting ‘Delete.’ If that’s what you meant, you’re already covered!
If you’re talking about Chat Completions stored at the API level, OpenAI retains those for billing and monitoring purposes. Right now, there isn’t an option to delete them manually, but it’s worth exploring whether OpenAI could introduce that in the future.
A quick way to check: If you’re seeing records in your API usage logs that you’d prefer to remove, it might be best to review OpenAI’s data retention policies or reach out to support to confirm what’s possible.
If API-level deletion is what you’re looking for, I think this is a great feature request! More granular control over stored API interactions would definitely give developers more flexibility. Thanks for bringing this up—OpenAI takes user feedback seriously, so raising this could be valuable!
Glad that’s what you were looking for! You might also want to update the thread flair to API - Feedback instead of Bug, since OpenAI does track this data intentionally. That said, I totally agree—having more control over API-level deletions would be valuable for everyone.
Right, OpenAI does allow users to apply for zero retention if they meet the criteria, but I believe the OP’s point is different—this isn’t just about retention policies, it’s about having a built-in UI feature for users to manage and delete their API completions themselves without needing to request it manually.
A direct deletion tool in the API dashboard would give users more autonomy and transparency over their data, rather than relying on OpenAI to handle it on a case-by-case basis. It’s less about the backend storage policy and more about improving user control and workflow efficiency. I can definitely see how that would be valuable!
Would you also find it useful to have visibility into stored completions before deciding what to delete? That might be another way to improve OpenAI’s data management UI.
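To make that visibility-plus-deletion idea concrete, here is a minimal sketch of what such a workflow could look like. Everything in it is hypothetical: `CompletionStore`, `StoredCompletion`, and their methods are invented stand-ins for a dashboard API that does not exist today, modeled entirely in memory.

```python
from dataclasses import dataclass


@dataclass
class StoredCompletion:
    """Hypothetical record of one stored API completion."""
    completion_id: str
    content: str
    flagged: bool = False  # set by moderation; blocks user deletion


class CompletionStore:
    """In-memory stand-in for a hypothetical completions dashboard API."""

    def __init__(self) -> None:
        self._items: dict[str, StoredCompletion] = {}

    def add(self, completion: StoredCompletion) -> None:
        self._items[completion.completion_id] = completion

    def list_completions(self) -> list[str]:
        # Visibility: enumerate stored IDs before deciding what to delete.
        return sorted(self._items)

    def delete(self, completion_id: str) -> bool:
        # User-driven deletion; refuses to remove flagged records.
        item = self._items.get(completion_id)
        if item is None or item.flagged:
            return False
        del self._items[completion_id]
        return True


store = CompletionStore()
store.add(StoredCompletion("cmpl-001", "ordinary output"))
store.add(StoredCompletion("cmpl-002", "policy-violating output", flagged=True))
print(store.list_completions())  # ['cmpl-001', 'cmpl-002']
print(store.delete("cmpl-001"))  # True
print(store.delete("cmpl-002"))  # False: flagged records are retained
print(store.list_completions())  # ['cmpl-002']
```

The point of the sketch is the contract, not the implementation: listing gives users the visibility you describe, and deletion succeeds only for records that moderation has not flagged.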
I think the main reason the 30-day retention can't be overridden is that, without a trusted history, OpenAI needs to retain evidence of potential abuse in case a user violates a usage policy. Other than that, I agree that all of the proposed features would be very welcome.
That makes total sense—abuse prevention is important, and I see why OpenAI would need a retention policy for that reason. That said, I think the OP’s point is less about completely overriding retention and more about giving users a structured way to manage their data while still maintaining necessary safeguards.
A couple of potential solutions that could balance this:
Encryption-based retention: Instead of keeping logs fully visible, OpenAI could encrypt stored completions so they’re still available for verification but not accessible for human review.
Controlled deletion with conditions: Users could delete their API logs themselves, but flagged completions (e.g., those that violate usage policies) would remain stored separately, still encrypted. An AI could also generate summaries of flagged content so human reviewers have context without reading the full logs.
Payment-gated deletion: If content hasn't been paid for yet, OpenAI could require final payment before allowing deletion, so they still receive compensation.
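The controlled-deletion idea can be sketched as a small policy function. This is purely illustrative: the names (`logs`, `quarantine`, `summarize`, `request_deletion`) are invented, and base64 stands in for real encryption, which a production system would handle with a KMS-managed cipher.

```python
import base64

# Toy data: completion_id -> (content, flagged-by-moderation)
logs = {
    "cmpl-001": ("ordinary output", False),
    "cmpl-002": ("policy-violating output", True),
}
quarantine: dict[str, dict[str, str]] = {}


def summarize(text: str) -> str:
    # Placeholder for an AI-generated summary giving reviewers context.
    return text[:20] + ("..." if len(text) > 20 else "")


def request_deletion(completion_id: str) -> str:
    """User-initiated deletion: unflagged logs vanish, flagged ones are retained
    separately as an encrypted body plus a plaintext summary for review."""
    content, flagged = logs.pop(completion_id)
    if flagged:
        quarantine[completion_id] = {
            "ciphertext": base64.b64encode(content.encode()).decode(),  # placeholder "encryption"
            "summary": summarize(content),
        }
        return "quarantined"
    return "deleted"


print(request_deletion("cmpl-001"))  # deleted
print(request_deletion("cmpl-002"))  # quarantined
print(sorted(quarantine))            # ['cmpl-002']
```

The design choice worth noting is that the user's delete request always succeeds from their point of view (the record leaves their visible logs), while abuse-prevention needs are met by the separate, encrypted quarantine with a summary for human review.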
Right now, the system offers no user-driven solution for this, which is what the OP is advocating for. I think there’s definitely room to explore ways to improve data control without compromising security.