Microsoft names cybercriminals who created explicit deepfakes


  • A lawsuit against criminal gang Storm-2139 has been updated
  • Four defendants have been named by Microsoft
  • The group is allegedly responsible for creating illegal deepfakes

A lawsuit has partially named a group of criminals who allegedly used leaked API keys from “multiple” Microsoft customers to access the firm’s Azure OpenAI service and generate explicit celebrity deepfakes. The gang reportedly developed and used malicious tools that allowed threat actors to bypass generative AI guardrails to generate harmful and illegal content.

The group, dubbed the “Azure Abuse Enterprise”, is said to comprise key members of a global cybercriminal gang tracked by Microsoft as Storm-2139. The individuals were identified as: Arian Yadegarnia aka “Fiz” of Iran, Alan Krysiak aka “Drago” of the United Kingdom, Ricky Yuen aka “cg-dot” of Hong Kong, China, and Phát Phùng Tấn aka “Asakuri” of Vietnam.

Microsoft’s Digital Crimes Unit (DCU) originally filed the lawsuit against 10 “John Does” for violating US law, as well as the acceptable use policy and code of conduct for its generative AI services – the complaint has now been amended to name and identify the individuals.

A global network

This is an update to the previously filed lawsuit, in which Microsoft outlined its discovery of the abuse of Azure OpenAI Service API keys. The company also pulled a GitHub repository offline, with the court allowing the firm to seize a domain related to the operation.

“As part of our initial filing, the Court issued a temporary restraining order and preliminary injunction enabling Microsoft to seize a website instrumental to the criminal operation, effectively disrupting the group’s ability to operationalize their services.”

The group is organized into creators, providers, and users. The named defendants reportedly used customer credentials scraped from public sources (most likely exposed in data leaks) to unlawfully access accounts with generative AI services.

“They then altered the capabilities of these services and resold access to other malicious actors, providing detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content,” said Steven Masada, Assistant General Counsel at Microsoft’s DCU.

You might also like

  • Take a look at our picks for the best malware removal software around
  • Check out our recommendations for the best antivirus software
  • Huge cyberattack found hitting vulnerable Microsoft-signed legacy drivers to get past security
Ellen Jennings-Trace
Staff Writer

