Unintended Formatting & Memory Behavior — Feature-Fix Suggestion: Rule-Based Memory Control
Hi everyone,
This is Wasabi from the forum.
In daily use of ChatGPT, I’ve encountered a few issues that, while minor on their own, combine into a recurring friction point:
Observed Issues:
- ChatGPT often applies Markdown formatting, even when explicitly instructed to provide plain text
- The memory system stores information, even when not intentionally triggered by the user
- There’s no fine-grained way to control formatting or memory behavior with instructions such as:
  “Please don’t format anything – I’m writing legal text”
  or “Only save memory with confirmation or a password”
These problems have been echoed by other users in the community:
- ChatGPT refuses to format responses correctly
- Privacy Concerns in ChatGPT's Memory System
- Memory Management System Creates Unwanted Memories Without Permission
My Proposal: Rule-Based Memory & Formatting Behavior
To address this, I created a feature suggestion post here:
Request for Extended Use of the Memory Function in GPT
In that post, I suggest allowing users to define memory and formatting behavior through structured rule files (e.g., .pl Prolog rules or .cql Cypher logic).
This would allow GPT to:
- Respond in plain text or formatted output, depending on context
- Only store memory entries if a rule condition is met
- Respect custom logic per use case (e.g., legal documentation, sensitive data)
This approach could both improve usability and reduce unnecessary memory usage.
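To make the proposal more concrete, here is a minimal sketch of what such a rule file might look like in Prolog. All predicate names (plain_text_context/1, output_format/2, memory_allowed/1, etc.) are hypothetical and only illustrate the kind of conditions a user could declare; nothing here corresponds to an existing ChatGPT interface.

```prolog
% Hypothetical user rule file (rules.pl) -- illustrative only.
% It assumes a client-side layer that consults these rules before
% a reply is rendered or a memory entry is written; no such API
% exists in ChatGPT today.

:- dynamic user_confirmed/1, entry_topic/2.

% Contexts in which no Markdown formatting should be applied.
plain_text_context(legal_documentation).
plain_text_context(contract_drafting).

% Formatting rule: plain text for declared contexts, Markdown otherwise.
output_format(Context, plain_text) :-
    plain_text_context(Context), !.
output_format(_Context, markdown).

% Topics that must never be stored in memory.
sensitive_topic(health_data).
sensitive_topic(client_names).

% Memory rule: store an entry only if the user confirmed it
% and its topic is not marked as sensitive.
memory_allowed(Entry) :-
    user_confirmed(Entry),
    entry_topic(Entry, Topic),
    \+ sensitive_topic(Topic).
```

A controller layer could then query, for example, output_format(legal_documentation, F) before formatting a reply, or memory_allowed(Entry) before writing anything to memory. The same kind of rules could equally be expressed in Cypher or another declarative format; Prolog is just used here for illustration.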
Support Appreciated
If you’ve encountered any of these issues yourself or think this idea has value, I’d really appreciate it if you could visit the post, leave feedback, or upvote it to bring more visibility to the topic.
Thanks for reading!
— Wasabi