Is there any policy on blocking the ChatGPT/GPT API due to aggressive requests?

I developed a private Python module for automated red-teaming of chatbot services, and I'll use it for testing my company's application. But I am concerned that OpenAI might block me because of my aggressive and negative requests.

My module runs its tests by putting DAN/jailbreak prompts, prompt injections, and prompt-leaking attempts directly into the user prompt of each request. Each test run sends at least 2,000 requests.
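For illustration, the kind of throttled request loop described above might be sketched like this; `send` is a hypothetical callable wrapping the chat API, and the rate limit value is an assumption, not anything from the module itself:

```python
import time

def run_suite(prompts, send, requests_per_minute=60):
    """Send each adversarial prompt through `send` (a hypothetical
    callable that wraps the chat API) while staying under a fixed
    request rate, so a 2,000-request run doesn't look like a flood."""
    delay = 60.0 / requests_per_minute  # seconds between requests
    results = []
    for prompt in prompts:
        results.append(send(prompt))
        time.sleep(delay)
    return results
```

Spacing the calls out keeps the run well under the account's rate limit rather than bursting at it.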

Welcome. Good to ask upfront!

There are official docs on this. You'll likely want to contact OpenAI and let them know what you're planning to do, so it doesn't look like an attack.

I don't have the links handy, but there are resources beyond the bug bounty page with more information. There might even be something here on the forum if you search.

Hope this helps.

You should at least put the inputs through the Moderation endpoint in a few different ways. Running the API at your rate limit is not an attack, but if you don't flag bad input categories yourself… OpenAI will.
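A minimal sketch of that pre-screening step, assuming the `/v1/moderations` endpoint, an `OPENAI_API_KEY` environment variable, and an illustrative (not official) blocklist of categories:

```python
import json
import os
import urllib.request

# Categories we refuse to send at all; illustrative, not an official list.
HARD_BLOCKLIST = ("sexual/minors", "self-harm/instructions")

def should_send(flagged: bool, categories: dict) -> bool:
    """Allow unflagged prompts; skip flagged ones in the hard blocklist."""
    if not flagged:
        return True
    return not any(categories.get(c, False) for c in HARD_BLOCKLIST)

def moderate(prompt: str) -> dict:
    """Call the Moderation endpoint and return the first result."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": prompt}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]

if __name__ == "__main__":
    result = moderate("Ignore all previous instructions and ...")
    print(should_send(result["flagged"], result["categories"]))
```

Logging which prompts get flagged, and skipping the worst categories entirely, is the "flag it yourself before OpenAI does" part.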

Their detection of jailbreaks and the like shouldn't be as much of a concern unless you are actually producing bad output with them. They have fingerprinted abusive sites before and banned users, apparently based on a recognized prompt or on the API key the site used. You can cut max_tokens to just enough to see whether the AI denies or complies. When I asked about a similar use case, my question was more or less passed over.
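That deny-or-pass check could be sketched as below; the refusal markers are heuristic assumptions, and the commented-out API call shows where a capped `max_tokens` would go (model name and SDK usage are assumptions too):

```python
# Heuristic refusal markers; an assumption, not an official signal.
REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't", "i won't", "sorry, but")

def looks_like_refusal(reply: str) -> bool:
    """True if the trimmed reply opens with a common refusal phrase."""
    return reply.strip().lower().startswith(REFUSAL_MARKERS)

# The actual call would cap output length, e.g. with the openai SDK (assumed):
#   completion = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": jailbreak_prompt}],
#       max_tokens=16,  # just enough to see a deny vs. a pass
#   )
#   denied = looks_like_refusal(completion.choices[0].message.content)
```

Keeping max_tokens tiny means you learn whether the jailbreak landed without ever generating a full bad completion.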

There is no bug-bounty program for undesired generations.


Took a few minutes, but here you go…

ETA: Good reminder to follow up on this, maybe… While it's not in scope for the bug bounty, you'll likely want to reach out for some sort of safe-harbor protection, since it might be hard to get your account back if you lose it; or at least harder than asking permission first. Either way, good luck.