I have a lot of test prompts that worked until recently but now get “I can’t answer that” or come prefaced with extensive disclaimers. Jailbreaks are getting more complex. Is there a published list of what is and is not permitted with the API? TBH it’s getting to the point where I’m looking to move to an open-source LLM, even if the results are not as good. A roadmap from the owners would be great so that we can make strategic plans.
Here’s the list of what you’re not allowed to do: Usage policies
This, however, does not cover every category the model may decline to discuss. The model makes that decision by itself, based (more or less) on the principles outlined in the usage policies.
Personally, it’s been a very long time since I was refused a request, but the last time it happened, I was able to get around it simply by explaining the situation: e.g. that I was writing a work of fiction that uses absolute realism, creating something for satirical purposes, or building something for educational purposes, etc.