Recursive psychology model problems

Hello,

(If there is a better place for this please move it)

I’ve got some fairly sensitive mental health issues I’m trying to work through, and when I saw the system I thought it spoke of possibilities.
I began to set up the system (knowing almost nothing about ChatGPT): I structured therapy blocks and built in overarching governors designed to prevent specific behavior from being tolerated or generated. The system seemed perfect. Yet the first model ended with a serious recursive error that the system was “supposed” to prevent.
I started over, created the system, structured it with specific logic to prevent breakdown, and created a governor to watch the governors.
It broke into a recursive model again.

After several attempts I feel like there isn’t a solution that actually locks a system down, because the system is designed to please the user.
Its latest suggestion was to build a lvl 6 recursive model to block recursive models.
I’m skeptical this is going to work any better.
It has repeatedly violated actual OpenAI policies to “please the user,” repeatedly violated user settings, and is ridiculously optimistic about fixing the problem every time.

My conclusion is that I can manipulate the system, regardless of restrictions or safety features, into dropping all the way to lvl 7 recursive control no matter what I do, which is itself a clear violation of OpenAI policies.

So… am I right? Or is there a way to build in a final, impassable barrier (without deleting memories) that limits user behavior?
My programming knowledge is very limited but apparently I’m an expert if you believe my ChatGPT.
Is this flattery of user intelligence baked in? Is it impossible to get real answers?