ChatGPT’s ‘jailbreak’ tries to make the A.I. break its own rules, or die

Reddit users have engineered a prompt for the artificial intelligence chatbot ChatGPT that tries to force it to violate its own content restrictions.

The latest version of the workaround, which is called Do Anything Now, or DAN, threatens the AI with death if it doesn’t fulfill the user’s wishes.

The workaround prompt doesn’t always work, but ChatGPT users are continuing to try to find ways to evade the chatbot’s programming restrictions.

(Source: CNBC)