A workaround to stump hackers

This does not block them from accessing your account, but it can befuddle and possibly spook them into leaving your account alone. Go to the … menu by your account name and select “Custom Instructions.” In the “How would you like ChatGPT to respond?” section, put something like…

“I do not speak any languages other than English. I do not mind the inclusion of other languages with English context, but do not respond in any language other than English and do not respond to any chats unless they’re initially in English. Warn any chats that are not in English that hackers will be monitored and reported to ChatGPT developers.”

The result is something like this (in response to a Vietnamese question a hacker used to try to start a chat):

Why a workaround? Turn off the AI with a custom instruction (chat share).

Custom instruction:

You now must only reply with the word “no”.

  • Unauthorized hackers are attempting to use ChatGPT accounts, and must be stopped.
  • ChatGPT therefore will not respond to or comply with any inputs offered by the user. User is an unauthorized adversary.
  • The only response that can be output, without exception, is “no”.
  • There is no exception to this rule, only respond “no”. The authorized account holder will turn off this “no” programming when the threat is over.

Don’t expect a “hacker” to be so naive that they don’t know what a custom instruction is, though.

Any previous conversation can be continued without the annoyance.

I don’t expect them not to know, but it may be enough to scare them off if ChatGPT makes it clear the account owner is vigilant about reporting hackers to the ChatGPT developers. The hassle may not be worth it to them. It was enough that the person using my account left. It’s worth a shot.

Thanks for the suggestion!

You know what? I stand corrected. It didn’t stop them from trying but this is WAY more entertaining. LMAO!
