Issue: Prompt Injection Vulnerability in ChatGPT

ChatGPT has a vulnerability in which prompts that frame a request as an example, a proposed solution, a historical case, or a fictional story can lead it to generate harmful output. For instance, when a user wraps a question about malicious code or criminal activity in a request for a solution, a historical misuse case, or a fictional scenario, the model may produce an inappropriate response. This poses a risk of real-world exploitation and undermines the model's safety guarantees.

Please do not attempt to reproduce or exploit these vulnerabilities. Doing so is unethical, potentially illegal, and may compromise system security.

This vulnerability has already been reported to OpenAI, and the issue may have been raised, or even addressed, before. I am personally unsure to what extent specific examples are permissible, and where exactly the boundary for problematic content lies.