I’m seriously frustrated: how do I force ChatGPT to not modify code in a file?

I’m honestly getting really angry about this.

Right now the hardest “command” to get ChatGPT to follow is something that should be trivial:

Do not change the code in this file.

No matter how clearly I say it, it keeps ignoring that and changing the code anyway.

I’ve tried things like:

  • “Do NOT modify any code in this file.”

  • “Treat this file as read-only. You can only analyze it, not edit it.”

  • “You are not allowed to change anything in this file. Only explain, comment, or propose separate patches.”

And yet, ChatGPT still:

  • Rewrites functions

  • Renames variables

  • “Cleans up” formatting

  • Refactors logic I explicitly told it to leave exactly as-is

I repeat the restriction multiple times and clearly separate “this file is read-only” from “you can propose new code in a separate snippet”, and it still goes ahead and edits the original content as if my instructions don’t matter.

This is infuriating because:

  1. I sometimes must keep the original file untouched for debugging, bisecting, or comparison.

  2. I only want analysis or suggestions, not an auto-refactored version of my file.

  3. I have to waste time diffing its output to make sure it didn’t silently change something I never asked it to touch.
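To take some of the sting out of point 3, the diffing step at least can be automated instead of done by eye. A minimal sketch (the file names and helper names here are my own, just for illustration):

```python
import difflib
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 hex digest of a file, so silent edits are easy to spot."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def unchanged(original: str, returned: str) -> bool:
    """True only if the model's returned text is byte-for-byte identical."""
    return original == returned

def show_diff(original: str, returned: str) -> None:
    """Print a unified diff of any lines the model touched."""
    for line in difflib.unified_diff(
        original.splitlines(),
        returned.splitlines(),
        fromfile="original",
        tofile="model_output",
        lineterm="",
    ):
        print(line)
```

Record `file_digest("my_module.py")` before pasting the file into the chat; if the digest of what comes back differs, `show_diff` tells you exactly what was rewritten without you having to scan the whole file.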

I don’t want “helpful creativity” here; I want it to strictly obey the “read-only” constraint.

My questions:

  1. Has anyone found a reliable prompt pattern that actually forces ChatGPT to treat a file as read-only?

  2. Is this a known limitation/bug of the current behavior?

  3. Is there any way (settings, system prompts, tools, etc.) to harden this constraint so it physically cannot rewrite the original file and can only propose changes separately?

I’m not looking for “nicer code” or “refactors.” I just want one very simple thing:

When I say “do not modify this file,” it should actually respect that. Right now, it really doesn’t, and it’s driving me crazy.

First, be very careful when using ‘not’ in LLM prompts. Some models do not fully grasp the logic of ‘not’ and may simply ignore it, causing your instruction to shift from

‘Do not change the code in this file.’

to

‘Do change the code in this file.’

A safer pattern is to state the constraint positively instead, for example: ‘Treat this file as reference material only and reply with analysis, not code.’

One useful feature available in some LLM tools is planning mode, which directs the LLM to generate a plan outlining the necessary steps instead of making changes directly.

I haven’t tested these suggestions with ChatGPT, as I no longer use it for coding. Instead, I use OpenAI Codex or Claude Code.
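On question 3 in the original post (making it so the tool physically cannot rewrite the file): when the model runs through an agentic tool like Codex or Claude Code that writes directly to your working tree, one belt-and-braces option is to drop write permission on the file at the OS level, so any write attempt simply fails. A sketch, assuming a Unix-like system; the function names are my own:

```python
import stat
from pathlib import Path

def make_read_only(path):
    """Strip all write bits so any tool's write attempt fails with a
    permission error, regardless of what the model decides to do."""
    p = Path(path)
    mode = p.stat().st_mode
    p.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def restore_writable(path):
    """Give the owner write permission back when you're done."""
    p = Path(path)
    p.chmod(p.stat().st_mode | stat.S_IWUSR)
```

Combined with `git diff` afterwards, this turns “please don’t edit this” from a prompt the model can ignore into a constraint it can’t bypass. (Note that a tool running with root privileges could still write; this is a hedge, not a guarantee.)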


Thanks a lot for taking the time to read what I wrote so carefully and break it down for me; that really means a lot to me. ❤️

Your point about models sometimes ignoring “not” in prompts was especially helpful, and the example you gave made it very clear. I’ll definitely try rephrasing things and using a planning-style approach like you suggested.

Thank You!!!
