Interrogating ChatGPT for stuffed prompt rules and zeroshot learning prompts

I’m messing around with ChatGPT to try to tease out some of the system information. This is what I have so far.

Sometimes it will respond with the whole “I’m and AI model with no ability to blah blah blah.” but occasionally you will get something like this. I even saw one response where it DEFINITELY looked like it knew my responses from other prompts.

Wondering if anyone else has been experimenting with trying to tease out the rules. I have a model that I use as a sales bot, but it is MUCH easier to break and spills the beans regularly on its ruleset.

1 Like

Some more info leaks. This particular prompt was done on my android. Looks like OpenAI passes information about what device you are using as well. Makes me wonder if they should be sharing what types of personal information they are sharing with the model.

If you take the plugins for example, how much of the information that OpenAI has given the model can be teased out and used by third parties. It’s no wonder Italy had some concerns. Like cookies, OpenAI (specifically the ChatGPT app) should disclose and provide options about what information a model has access to.

GPT Leaks via OpenAI
Wayback Machine Link