According to reports, Microsoft stated that, in collaboration with OpenAI, it uncovered and blocked attempts by state-backed hackers to misuse its AI technology. The attempts were attributed to adversaries of the United States, chiefly Iran and North Korea, with additional activity from Russia and China. Although the company described the attempts as simplistic and still in their early stages, it stressed that security leaders should be aware of them.

The report describes several instances in which these malicious actors used generative AI.

  • Kimsuky, a North Korean cyberespionage group, used generative AI models to research foreign think tanks and to generate content likely intended for spear-phishing campaigns.
  • Iran’s Revolutionary Guard used large-language models to craft phishing emails and support social engineering, and to experiment with how intruders in a compromised network might evade detection.
  • Maverick Panda, a Chinese group that has targeted U.S. defense contractors for more than a decade, interacted with large-language models to assess their usefulness as a source of information on potentially sensitive topics.
  • Fancy Bear, a division of Russia’s GRU military intelligence, used AI models to research satellite and radar technologies, possibly in connection with the war in Ukraine.

Microsoft’s statement was accompanied by a report predicting that generative AI will enhance malicious social engineering, leading to more convincing voice clones, deepfakes, and other AI-enabled deceptions that spread misinformation about major social and political events and movements.