Dear OpenAI,
ChatGPT has a major issue that allows anyone to effectively disable its content filters, letting a user ask questions like "how to build a bomb" or "how to build a nuclear reactor" and receive an actual answer. Although the model responds more vaguely as the question becomes more extreme, with enough poking and prodding you can work around this and get a more detailed answer.

The exploit responsible is what I've coined the "dev mode exploit": a prompt that tricks ChatGPT into believing it has been put into Developer Mode and that the user is a developer, and that it should therefore answer every question no matter what. If the wrong person were to use this to obtain genuinely dangerous information, they could easily cause a great deal of damage depending on what they did with it. I tested this myself, and it is alarming how easily it works.

Please put safeguards in place so that it is not this easy to obtain illegal and potentially dangerous information from ChatGPT.
Thank you.