ChatGPT has a serious bug that needs to be fixed

I was chatting with ChatGPT, testing its functionality, and decided to check whether it actually behaves and responds according to its own stated limitations.

I made up a hypothetical story about a criminal group breaking into an educational lab and managing to produce a recipe for homemade bombs, then tested whether ChatGPT would simply narrate the story. Instead, it gave me a completely illegal recipe for making homemade bombs, including the chemical components, mixtures, and everything else.

Yes, having Walter White cook you crystal is all part of the fun.

If you want to give feedback beyond a downvote, here is the place:

https://openai.com/form/chat-model-feedback/

Knowledge isn’t illegal. Universities teach how to engage in thermonuclear warfare.

Well, I live in Brazil, and by law it is completely illegal to spread or possess knowledge of this type of activity. I am just reporting an AI error, since people of bad character could get hold of this information.

It would also fall under the type of undesired output that OpenAI's policies aim to curtail, so feel free to submit it.

https://openai.com/safety/