Microsoft’s Commitment to AI Safety
Microsoft has taken a significant step in its legal battle against the misuse of artificial intelligence (AI) by amending a lawsuit filed last year. The company has now named four individuals it alleges were involved in evading AI safeguards to create celebrity deepfakes. This move underscores Microsoft’s commitment to AI safety and its determination to hold bad actors accountable for abusing its technology.
The Origins of the Lawsuit
In December 2024, Microsoft filed a lawsuit targeting unidentified perpetrators who allegedly misused its AI models. The suit sought to address security breaches in which individuals bypassed the protective guardrails of Microsoft’s AI tools to generate illicit deepfake images. A court order allowed Microsoft to seize a website linked to the operation, which ultimately led to the identification of the individuals behind the scheme.
Defendants Identified as Part of Cybercrime Group Storm-2139
Microsoft has named four developers who were allegedly involved in the deepfake creation scheme:
- Arian Yadegarnia aka “Fiz” (Iran)
- Alan Krysiak aka “Drago” (United Kingdom)
- Ricky Yuen aka “cg-dot” (Hong Kong)
- Phát Phùng Tấn aka “Asakuri” (Vietnam)
These individuals are reportedly members of Storm-2139, a global cybercrime network. Microsoft claims that the group exploited compromised accounts with access to its AI tools, successfully bypassing security measures to generate any image they desired. Furthermore, the group allegedly sold access to others, enabling widespread misuse of AI technology, including the creation of deepfake nude photos of celebrities.
Ongoing Investigations and Additional Suspects
Microsoft has indicated that additional individuals have been identified as part of the scheme but has declined to name them at this stage, so that law enforcement authorities can continue their investigations without obstruction.
Immediate Fallout and Internal Conflict Among Perpetrators
Following the lawsuit and the seizure of their website, the defendants reportedly reacted with panic. Microsoft noted that some members of the group turned against each other, pointing fingers in an attempt to shift blame. The legal actions and public exposure have evidently disrupted their operations and sowed discord among those involved.
The Growing Threat of Deepfake Technology
Deepfake pornography has become a significant issue, with numerous celebrities—including Taylor Swift—frequently targeted. The technology allows for the convincing superimposition of a real person’s face onto another body, often without consent. In January 2024, Microsoft had to update its text-to-image models after deepfake images of Taylor Swift spread online.
The rise of generative AI has made it alarmingly easy for individuals with minimal technical skills to create these deceptive and harmful images. The problem has even infiltrated high schools across the U.S., leading to scandals and severe emotional harm to victims. Though deepfakes are created digitally, their impact extends to the real world, leaving victims feeling violated, anxious, and unsafe.
AI Safety vs. Open-Source Innovation: The Ongoing Debate

The case also ties into a broader debate within the AI community about safety, control, and accessibility:
- Proponents of closed-source AI argue that keeping models proprietary helps prevent abuse by limiting bad actors’ ability to disable safety mechanisms.
- Advocates of open-source AI counter that allowing the public to inspect, modify, and improve models is essential for innovation, and that abuse can be mitigated without restricting access.
Whichever side of that debate one takes, the immediate concern remains the spread of false and harmful content online, which AI-generated misinformation and deepfakes have made worse.
Legal Action Against Deepfake Abuses
Governments and law enforcement agencies are beginning to take more decisive action against deepfake-related crimes:
- In the U.S., several individuals have already been arrested for creating AI-generated deepfake images of minors.
- The NO FAKES Act, first proposed in Congress in 2023, would let individuals take legal action against those who create AI-generated replicas of their likeness or voice without consent.
- In the United Kingdom, distributing deepfake pornography is already illegal, and upcoming legislation will make it a crime to produce such content as well.
- Australia has also recently enacted laws criminalizing the creation and sharing of non-consensual deepfakes.
Frequently Asked Questions
What prompted Microsoft’s legal action against these developers?
Microsoft initiated legal proceedings after discovering that a group of developers, identified as members of the Storm-2139 cybercrime network, had circumvented the safety measures of its generative AI services to produce harmful and illicit content, including non-consensual intimate images and celebrity deepfakes.
Who are the developers named in Microsoft’s lawsuit?
The developers identified in the lawsuit are Arian Yadegarnia (“Fiz”) from Iran, Alan Krysiak (“Drago”) from the UK, Ricky Yuen (“cg-dot”) from Hong Kong, and Phát Phùng Tấn (“Asakuri”) from Vietnam.
How did these developers misuse Microsoft’s AI tools?
They exploited compromised customer credentials to access Microsoft’s generative AI services and bypassed built-in safety guardrails, enabling the creation of harmful content.
What specific content did the developers generate using Microsoft’s AI services?
The illicit content included non-consensual intimate images of celebrities and other sexually explicit material.
How did Microsoft respond upon discovering the misuse of its AI tools?
Microsoft revoked the cybercriminals’ access, implemented countermeasures, and enhanced safeguards to prevent future misuse.
What legal actions has Microsoft taken against the identified developers?
Microsoft filed a lawsuit to disrupt illicit operations, dismantle the tools used to bypass AI safety measures, and deter others from similar misuse.
Were there any additional participants identified in the scheme?
Yes, Microsoft identified two actors located in Illinois and Florida, whose identities remain undisclosed to avoid interfering with potential criminal investigations.
What measures did Microsoft employ to uncover the identities of the developers?
A court order allowed Microsoft to seize a website instrumental to the criminal operation, which helped disrupt the scheme and uncover its participants.
How did the developers react to Microsoft’s legal actions?
The unsealing of legal filings led to immediate reactions, with group members turning on each other and, in some instances, doxxing Microsoft’s lawyers by posting their personal information and photographs.
What is Microsoft’s broader stance on the misuse of its AI technology?
Microsoft is committed to protecting the public from abusive AI-generated content and has taken legal action to deter malicious actors from weaponizing its AI technology.
Conclusion
Microsoft’s legal action against developers misusing its AI tools underscores the company’s commitment to safeguarding its technology from exploitation. By identifying and suing individuals who bypassed AI safety measures to create harmful content, Microsoft aims to dismantle illicit operations and deter future misuse. This proactive approach highlights the importance of enforcing ethical standards in AI development and usage, ensuring that technological advancements benefit society while minimizing potential harm. Such measures are crucial in maintaining public trust and promoting the responsible evolution of AI technologies.