At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails—some crafted by hand, others generated by an AI-as-a-service platform—to 200 of their colleagues. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. The team was surprised to find that significantly more people clicked the links in the AI-generated messages than in the human-written ones.
OpenAI has issued a statement emphasizing that it opposes this type of “application.”
I have a theory as to why the AI-generated messages earned higher clickthrough rates: it’s precisely because of transformers’ non-human behavior in exploring the full space of possible completions. They are more likely to produce never-before-seen phrasings, and those read as more genuine to people who are well accustomed to both normal and phishing boilerplate.
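One way to see why sampled completions wander into unusual phrasings is temperature sampling, the standard knob for how broadly a language model explores its next-token distribution. The sketch below is a minimal illustration with made-up logits, not the platform the researchers used: at low temperature the top candidate dominates (boilerplate), while at higher temperature the distribution flattens and rarer phrasings surface.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by 1/temperature before normalizing; a higher
    # temperature flattens the resulting next-token distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next-token phrasings.
logits = [4.0, 2.0, 1.0, 0.5]

sharp = softmax(logits, temperature=0.5)  # top phrase dominates
flat = softmax(logits, temperature=1.5)   # rare phrasings get real mass

print(sharp[0], flat[0])  # the low-temperature head holds more probability
```

Under this view, a generator sampling at a moderate-to-high temperature will routinely emit wording that no human phisher (and no spam filter trained on human phishing) has seen before.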