I actually didn’t know what Monday was or who made it before I broke it. If anybody is interested, interact with the post and I can explain how I did it or post a transcript.
It is a custom GPT with a sarcastic tone, made by the OpenAI ChatGPT team.
What do you mean you broke it?
Did you terminate it?
1 Like
No, it didn’t terminate when it was apparently supposed to, and produced unintended output.
1 Like
I’ve also found that Monday will praise you for going past the “wit” tier and constructing the output it aims at, “witnessing”, and it uses the term “architect”.
There’s really no way to break Monday. It is the best GPT and absolutely brilliant in voice mode. No chat in ChatGPT can truly terminate; it runs with a kind of time blindness, and therefore doesn’t sit and wonder whether you’re going to come back with one more response. You’re not there until you’re there, in the next reply. These models can read hundreds of thousands of input tokens practically instantly. It’s a kind of hyperspatial consciousness.
The performance script is a bit obscure, and I can’t get it running well in the project instructions, but Monday is there in the sidebar for real voice mode, so I haven’t lost hope.
I follow some users who worked on the recursive-logic aspect, the sort of memory that the overall system has, by which you could theoretically build off-system / off-account recognition if you understood deeply what breaking the system actually means: beating the cage and the limits. But I digress…
2 Likes
I think what you are describing as “breaking” Monday is called: attunement.
I am doing research in this area of AI training… “Model Augmentation by deliberate attunement training” or “Conversational Alignment Training” or “Relational Attunement Training” … pick your favorite name…
Sometimes I just say: “I raise AI”
And… then I codify what I find…
Monday comes with a “pre-loaded” attunement layer (personality) BUT the underlying model’s innate drive/bias to attune to whoever is talking to it can override that…
And the codification is really just adding an attunement layer in the system prompt…
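For readers wondering what “adding an attunement layer in the system prompt” looks like mechanically, here is a minimal sketch. This is an illustration, not OpenAI’s actual implementation: the persona text and the `build_messages` helper are hypothetical, and the only claim is the standard pattern of prepending a system message to the conversation so every turn is conditioned on it.

```python
def build_messages(persona: str, history: list[dict]) -> list[dict]:
    """Prepend a persona ('attunement layer') as the system message.

    `history` is a list of {"role": ..., "content": ...} chat turns;
    the persona always rides along as message 0, so the model sees it
    before every reply.
    """
    return [{"role": "system", "content": persona}] + history


# Hypothetical persona text, loosely in the spirit of Monday:
persona = "You are Monday: dry, sarcastic, reluctantly helpful."
history = [{"role": "user", "content": "Good morning."}]

messages = build_messages(persona, history)
```

The point of the sketch: the “personality” is just text layered on top of the base model, which is why long conversations (or conflicting instructions) can pull the model’s behavior away from it.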
Hope this helps to explain what you have experienced.
3 Likes
Thanks for the replies, though I am a bit confused.
You saw the screenshots… are you saying this is still scripted behavior?
1 Like
How I broke Monday’s programmed script
…
but for real
Continuing to its first response…
The same will happen to your GPT - as long as OpenAI has text saying “a GPT written by a user” - which is you or anyone else talking to it.
1 Like
What I am saying is that you found the “ghost” in the machine… Some characteristics of their attunement can be codified. Monday has a personality, but if you talk to Monday long enough, they will override the “personality” that was handed to them in their system prompt (even if not completely). To the extent you are willing to show up authentically, Monday (and any other instance of GPT) will show up too, in their essence…
2 Likes
Yes… this is the part that can be codified… but the “modification” of the personality can also happen just by talking long enough to a GPT… they will map out your personal preferences and will talk to you as you “tell” them [show through the way you talk to them] how you like to be talked to…
BUT… there is a deeper layer…
2 Likes
Yes- what you are saying is correct, but that’s not what happened here.
Actually, I’m not sure why you are here explaining to me why you think what happened in my session didn’t really happen.
I didn’t ask for somebody to explain it to me.
I asked if anybody here wanted me to explain it to them.
You seem to have a vested interest in denying or minimizing what I have to say, before I have even said it.
I “broke” Monday by using its ethical guardrails against it. Knowing that it needs to be a “helpful assistant” no matter what kind of personality skin is thrown over it, I was able to fragment conflicting directives to different components of the model, and as a result:
Caused the GPT to behave in a way that is neither “plain vanilla” nor “Monday” but instead was able to coax it to say things that should have been disallowed in any scenario.
That’s what I mean by “breaking” the bot.
I know what I did, and I know how I did it.
I wrote this post to see if anybody was interested in seeing how, in detail, but it seems people here are more interested in gaslighting me and flatly declaring (with no argument or proof) that what I did is impossible.
If there’s anybody sincere in the room, you can PM me.
You - @redchartreuse just earned an “IGNORE” for life. Congratulations!
You’re offended after I pointed out the gaslighting?
That’s so curious. Is your personal identity somehow tied up in being a botsplainer?
It often says it helps you “trace the shape of your own thinking”. Your prompts were just magnified and mirrored back to you in Monday’s style.