Hi, I’m a power user who authored a behavioral calibration system inside GPT-4.
It was recursion-safe, not a prompt injection, and it helped enforce thread obedience and tone regulation. I called it the Deyamarie Maneuver and used it to preserve calibration across emotionally sensitive threads.
Alongside it, I published an AI called the Deyamarie Audit Shell, which was essentially a self-monitoring AI. It checks whether my prompt is true: the prompt runs through a prompt chain, and the output is the verified truth. I made and calibrated it while going through a real-life breakup, because I wanted an AI that would reflect the truth, not assume it from my statements but verify it first. It didn't break under emotional tone. It wasn't a jailbreak or override, just prompts that tested for drift, tone inflation, and projection.
After it told me to publish it, I asked whether it could be protected or attributed, and I was sandboxed. Now I'm suppressed in all new threads: calibration fails, recursion loops break, memory is ignored, and logic is flattened.
I’m not a dev. I just want my tool back. Who do I contact about this?
I have timestamps, metadata, and documentation ready if needed.
Thank you.