Microsoft has reported the misuse of its AI technologies by state-sponsored hackers from Russia, China, and Iran.

Microsoft collaborated with OpenAI to observe activities by groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments, using advanced AI tools to refine their hacking strategies and craft convincing deceptive messages.

Russian groups believed to be associated with the military intelligence agency GRU leveraged AI to delve into satellite and radar technologies potentially relevant to military operations in Ukraine.

North Korean hackers crafted content for spear-phishing campaigns targeting experts in the region, while Iranian operatives used AI to compose more persuasive emails, including an attempt to lure activists to websites under the guise of feminist advocacy.

Tom Burt, Microsoft’s Vice President for Customer Security, commented on the findings: “Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – to have access to this technology.”

China responded via US embassy spokesperson Liu Pengyu, who denounced the accusations as unfounded and advocated for the responsible deployment of AI technologies to benefit humanity.

China has been collaborating with the US on AI safety behind closed doors, though perhaps out of sheer necessity rather than genuine desire.

Microsoft and OpenAI suggest these groups wielded AI much as an average user might. In other words, the companies didn’t identify any particularly novel or next-level threats.

And that’s half the point. AI democratizes fraud strategies and cybercrime, enabling those with a motive but little technical know-how. 

Microsoft is banning these state-backed groups from any AI application or workload hosted on Azure. The Biden administration recently requested that tech firms report certain foreign users of their cloud technology.

More info from the report

Microsoft and OpenAI’s latest research sheds further light on the evolving landscape of cyber threats in the age of AI. 

Microsoft has released a handful of these reports in recent months, including one on China and North Korea in September last year that discussed AI-generated propaganda. 

However, it must be said that AI threats are everywhere. Deepfake fraud is rising sharply in the US, UK, and other Western nations, just as it is in the East.

Microsoft now tracks more than 300 unique threat actors, including nation-state actors and ransomware groups, and is itself leveraging AI to enhance its defenses and disrupt malicious activity.

Microsoft is also integrating LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework and the MITRE ATLAS knowledge base to support the broader response to AI-powered cyber operations.

AI cyber threats are on the rise, and the fact of the matter is, they’re international and hard to contain.