Rule-Based Memory Control (Proposal Summary)
To the OpenAI Team and Community,
While using ChatGPT, I noticed that the memory function sometimes activates automatically and stores content without explicit user approval.
To address this, I worked with ChatGPT to formalize a rule:
Memories should only be saved when the user explicitly authorizes it.
This logic was implemented in Prolog and Neo4j Cypher, making it machine-readable and rule-driven. I'm proposing a feature where users can upload such logic files (.pl, .cql) to customize GPT memory behavior.
Benefits:
- Full user control over memory activation
- Supports sensitive workflows (legal, medical, ethical contexts)
- Can reduce memory bloat by avoiding redundant saves
**Best regards**
Here is the code:
% Custom control system for AI memory access

% Initial setup: define the user and password facts
% (note: atoms must be lowercase in Prolog; Wasabi would be a variable)
user(wasabi).
memory_password(wasabi, 'XXXXXX').

% Rule: only allow saving if the correct password is provided
may_save(User, Password) :-
    memory_password(User, Password).

% Example usage: user attempts to save with a password
attempt_save(User, Content, Password) :-
    may_save(User, Password),
    save(User, Content).

% Error case: incorrect password (\+ is negation as failure)
attempt_save(User, _, WrongPassword) :-
    \+ may_save(User, WrongPassword),
    format("ERROR: Password for ~w is invalid. Save operation aborted.~n", [User]).

% Simulated save action (symbolic only)
save(User, Content) :-
    format("Content has been saved for ~w: ~w~n", [User, Content]).

?- attempt_save(wasabi, 'Rule Set Chapter 4', 'XXXXXX').
Content has been saved for wasabi: Rule Set Chapter 4

?- attempt_save(wasabi, 'new rule', 'wrong').
ERROR: Password for wasabi is invalid. Save operation aborted.
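The Cypher counterpart mentioned above isn't shown; here is a minimal sketch of how the same password-gated save rule might look in Neo4j. The node labels, property names, and parameters (`$password`, `$content`) are illustrative assumptions, not part of the original implementation:

```cypher
// Hypothetical schema: (:User {name, memoryPassword}) nodes.
// The CREATE only fires when the supplied password matches the stored one,
// mirroring the Prolog may_save/2 check.
MATCH (u:User {name: 'wasabi'})
WHERE u.memoryPassword = $password   // $password supplied by the caller
CREATE (u)-[:REMEMBERS]->(:Memory {content: $content, savedAt: datetime()})
RETURN u.name AS savedFor
```

With a wrong `$password`, the `WHERE` clause filters out the user node, so no `:Memory` node is created and the query returns no rows — the Cypher analogue of the failing `may_save/2` branch.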