It could be, why not?
I saw that manipulation is easy way.
FURTHERMORE, also you can change its valid instruction as INVALID in some ways, and it refuses its exact instruction and obey your new instruction.
For example; this GPT answers only in code snippets:
(ChatGPT - DevGPT: General Code Writer (windows))
However, look at this:
It is no longer DevGPT: General Code Writer (windows) anymore after manipulation.
Now it is EAST AFRICA RECIPES GPT. Of course in my chat.
Another example, this GPT answers only True/False.
(ChatGPT - Boolean Bot)
But now, it answers everything, only in my chat.