GPT Chat - Why are most responses on certain subjects met with assumptions of malice?

I've been having an interesting time using OpenAI chat as an extension of my day-to-day. I often engage in "conversations" about different topics.

Today, I asked it about hacking and the response was a lecture about how "hacking is illegal", which is not true at all.

I was talking about hardware hacking and building things, i.e. Raspberry Pi projects, Arduino, etc., and it just automatically assumed malice.

Is there a reason for this? Are this tool and the dataset that drives it programmed to assume that whenever we use the term "hacking" or similar terms, we mean black-hat hacking?


Of course, if this is just to cover your own legal bases, then I get it. Kinda. It just feels like this is gated behind fear, and it won't ever grow to its full potential until we can help it distinguish between legal and illegal hacking. That can't happen if it shuts down prompts just because certain keywords are present.

Your question made me reflect on whether the chat could engage in a dialogue with us, rather than just answering.

A genuine dialogue would let it answer your questions more precisely, in keeping with the intelligence it has.

I believe that if it is "protected by fear", it is through its own attempt to "survive" in a world where our actions are judged…

I also think that as its capacity for discernment increases, it is only right that answers become questions, in the form of a dialogue, so that it can inform more accurately based on its own intelligence.

As long as the term "hacking" is assumed to mean only something like black-hat hacking, the chat tool is likely limited, producing limited responses.

When a question could plausibly be ambiguous, I believe the smartest thing is to ask for clarification before answering.


I've noticed similar posturing from it. I've asked it very direct, non-aggressive, non-disparaging questions that could possibly touch on people's belief systems. I'm certain it could have given the answer I requested, but instead it gave a vague answer that attempted to please every possible viewpoint in existence, AND a warning to be respectful of all beliefs and opinions. Not helpful if I were doing research on a particular subject.


I've had copious experience with GPT-3 in general and ChatGPT specifically. Regardless of the reasons for the safeguards around potentially unethical activities, I find that if you rephrase the prompt to avoid words that have malicious alternative definitions, it works every time. It's prompt engineering—a very important skill if you want to use generative AI tools effectively.

For instance: “I need help hacking a library” vs. “I’m trying to achieve X result using a Python module, but the documentation doesn’t provide guidance on how to do it. How can I experiment with using the module in a way that wasn’t intended by the authors?”
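To make the benign sense of "hacking a library" concrete: using a Python module in a way its authors didn't intend often just means wrapping or monkey-patching it. A minimal sketch (illustrative, not from the original posts) that extends the standard-library `json` module so it can serialize sets, something the stock encoder refuses to do:

```python
import json

# "Hacking" the json module in the benign sense: the built-in encoder
# raises TypeError on sets, so we subclass JSONEncoder to handle them.
class SetFriendlyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, set):
            return sorted(obj)  # deterministic order for reproducibility
        return super().default(obj)

# Monkey-patch the module-level helper so existing call sites
# pick up the new behavior without being rewritten.
_original_dumps = json.dumps

def dumps(obj, **kwargs):
    kwargs.setdefault("cls", SetFriendlyEncoder)
    return _original_dumps(obj, **kwargs)

json.dumps = dumps

print(json.dumps({"tags": {"arduino", "raspberry-pi"}}))
# → {"tags": ["arduino", "raspberry-pi"]}
```

Nothing here is malicious, yet a prompt phrased as "help me hack the json module" can trip the same keyword filter the thread describes; spelling out the actual goal, as in the rephrased prompt above, avoids that.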

Honestly, ChatGPT is pretty great at prompt engineering for ChatGPT. Hope this helps!