Microsoft names cybercriminals who created explicit deepfakes
By Ellen Jennings-Trace, published 3 March 2025
Members of the “Azure Abuse Enterprise” have been named by Microsoft

- A lawsuit against criminal gang Storm-2139 has been updated
- Four defendants have been named by Microsoft
- The group is allegedly responsible for creating illegal deepfakes
A lawsuit has partially named a group of criminals who allegedly used leaked API keys from “multiple” Microsoft customers to access the firm’s Azure OpenAI service and generate explicit celebrity deepfakes. The gang reportedly developed and used malicious tools that allowed threat actors to bypass generative AI guardrails to generate harmful and illegal content.
The group, dubbed the “Azure Abuse Enterprise”, is said to comprise key members of a global cybercriminal gang tracked by Microsoft as Storm-2139. The individuals were identified as: Arian Yadegarnia, aka “Fiz”, of Iran; Alan Krysiak, aka “Drago”, of the United Kingdom; Ricky Yuen, aka “cg-dot”, of Hong Kong, China; and Phát Phùng Tấn, aka “Asakuri”, of Vietnam.
Microsoft’s Digital Crimes Unit (DCU) originally filed a lawsuit against 10 “John Does” for violating US law and the acceptable use policy and code of conduct for its generative AI services – the complaint has now been amended to name and identify four of the individuals.
A global network
This is an update to the previously filed lawsuit, in which Microsoft outlined its discovery of the abuse of Azure OpenAI Service API keys and pulled a GitHub repository offline, with the court allowing the firm to seize a domain related to the operation.
“As part of our initial filing, the Court issued a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize their services.”
The group is organized into creators, providers, and users. The named defendants reportedly used customer credentials scraped from public sources (most likely exposed in data leaks) to unlawfully access accounts with generative AI services.
“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” said Steven Masada, Assistant General Counsel at Microsoft’s DCU.
Ellen has been writing for almost four years, with a focus on post-COVID policy, while studying for a BA in Politics and International Relations at Cardiff University, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked on Future Publishing’s MVC content team, collaborating with merchants and retailers to upload content.