Disrupting malicious uses of AI by state-affiliated threat actors

We terminated accounts associated with state-affiliated threat actors. Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.

OpenAI, in collaboration with Microsoft Threat Intelligence, has disrupted five state-affiliated threat actors that sought to abuse AI services for malicious cyber activities: Charcoal Typhoon and Salmon Typhoon (affiliated with China), Crimson Sandstorm (Iran), Emerald Sleet (North Korea), and Forest Blizzard (Russia). OpenAI terminated the associated accounts after detecting activities such as researching companies and cybersecurity tools, translating technical papers, generating phishing content, and conducting open-source research on defense technologies.

Although current models offer only limited capabilities for malicious tasks, OpenAI remains vigilant and employs a multi-pronged approach to combating threats. This includes monitoring and disrupting malicious actors, collaborating with the AI ecosystem to share information, iterating on safety measures, and maintaining public transparency. By staying proactive and sharing insights, OpenAI aims to strengthen collective defenses against evolving adversaries while continuing to provide beneficial AI services to the vast majority of users. (GPT-4 Summary)