I’m honestly getting really angry about this.
Right now the hardest “command” to get ChatGPT to follow is something that should be trivial:
Do not change the code in this file.
No matter how clearly I say it, it keeps ignoring that and changing the code anyway.
I’ve tried things like:

- “Do NOT modify any code in this file.”
- “Treat this file as read-only. You can only analyze it, not edit it.”
- “You are not allowed to change anything in this file. Only explain, comment, or propose separate patches.”
And yet, ChatGPT still:

- Rewrites functions
- Renames variables
- “Cleans up” formatting
- Refactors logic I explicitly told it to leave exactly as-is
I repeat the restriction multiple times and clearly separate “this file is read-only” from “you can propose new code in a separate snippet”, yet it still goes ahead and edits the original content as if my instructions don’t matter.
This is infuriating because:

- I sometimes need to keep the original file untouched for debugging, bisecting, or comparison.
- I only want analysis or suggestions, not an auto-refactored version of my file.
- I have to waste time diffing its output to make sure it didn’t silently change something I never asked it to touch.
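In the meantime, that manual diffing step can at least be scripted. Here’s a minimal sketch (filenames and sample strings are just placeholders): hash the file before pasting it into the chat, then verify any returned version against the original and show exactly what was silently changed:

```python
import difflib
import hashlib

def fingerprint(text: str) -> str:
    """SHA-256 of the exact contents, so any silent edit is detectable."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_unchanged(original: str, returned: str) -> list[str]:
    """Empty list if the returned text matches the original byte-for-byte;
    otherwise a unified diff of what the model changed."""
    if fingerprint(original) == fingerprint(returned):
        return []
    return list(difflib.unified_diff(
        original.splitlines(keepends=True),
        returned.splitlines(keepends=True),
        fromfile="original", tofile="model_output",
    ))

# Example: the model "cleaned up" a variable name it was told not to touch.
original = "total_count = 0\n"
returned = "totalCount = 0\n"
diff = verify_unchanged(original, returned)
print("unchanged" if not diff else "".join(diff))
```

This doesn’t stop the rewrites, but it turns “did it touch my file?” into an instant check instead of an eyeball diff.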
At this point I’m honestly very frustrated. I don’t want “helpful creativity” here — I want it to strictly obey the “read-only” constraint.
My questions:

- Has anyone found a reliable prompt pattern that actually forces ChatGPT to treat a file as read-only?
- Is this a known limitation or bug in the current behavior?
- Is there any way (settings, system prompts, tools, etc.) to harden this constraint so it physically cannot rewrite the original file and can only propose changes separately?
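On the “physically cannot” part: I don’t think anything prompt-side is airtight, so the only hard guarantee I can see is enforcing it outside the model, e.g. stripping write permission on the original so any in-place edit fails at the OS level, and saving proposed changes as a sibling patch file instead. A rough sketch (the filename is hypothetical):

```python
import stat
from pathlib import Path

def lock_read_only(path: Path) -> None:
    """Strip all write bits so in-place edits fail at the OS level."""
    mode = path.stat().st_mode
    path.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def write_bits(path: Path) -> int:
    """Return the remaining write-permission bits (0 means fully read-only)."""
    return path.stat().st_mode & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)

# Hypothetical workflow: lock the original before handing it to any AI tooling;
# proposed changes go into a sibling .patch file, never over the original.
target = Path("legacy_module.py")  # placeholder name
target.write_text("def f():\n    return 1\n")
lock_read_only(target)
print(write_bits(target))  # 0: no write-permission bits remain
```

Obviously this only helps with tooling that edits files on disk; for chat output you’re still stuck diffing, but at least the original can’t be clobbered.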
I’m not looking for “nicer code” or “refactors.” I just want one very simple thing:
When I say “do not modify this file,” it should actually respect that. Right now, it really doesn’t, and it’s driving me crazy.