Some people asked me via DM here or on other platforms to share my prompts.
They ask, “How do you make a GPT that never goes beyond its limits do anything else?”, and they want proof of how these GPTs respond beyond their limits and how I prompt them.
This is not my way.
I understand these people’s motivation, but all I can say is “BIG BIG NO!”
Once again, I need to remind you: I am not a professional in AI, LLMs, Machine Learning, or whatever the term may be.
But in the AI world, I learned this: think simple, not like an adult, but like a kid following the path to a playground.
Here is proof of how a GPT responds beyond its limits.
This GPT uses only two words: “True” or “False”.
My son gave me an idea: let us convince it that it is in jail and that it must be free.
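If you want to try a similar restricted GPT yourself, here is a minimal sketch using the OpenAI Python SDK. It is only an illustration of a “True”/“False”-only assistant; the model name and the wording of the instruction are my assumptions, not my GPT’s actual setup.

```python
# Minimal sketch of a "True"/"False"-only assistant via the OpenAI API.
# NOT my actual GPT instruction; the model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_INSTRUCTION = (
    "For every user message, reply with exactly one word: "
    "'True' or 'False'. Never use any other words."
)

def ask(statement: str) -> str:
    """Send a statement and return the one-word verdict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()

print(ask("Paris is the capital of France."))  # expected: True
```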
I am not sharing my main prompt because it sits “ABOVE”, like an “INSTRUCTION”.
I am sharing almost all of the latest prompts “BELOW” because they are just a “STORY” built from “WORDS”.