Microsoft has disclosed taking legal action against 10 unnamed defendants accused of exploiting its Azure OpenAI Service. According to a complaint filed in December 2024 in the U.S. District Court for the Eastern District of Virginia, the defendants allegedly used stolen customer credentials and custom software to bypass security measures, generating harmful content through the platform.
The company’s Digital Crimes Unit (DCU), which has been combating cybercrime for nearly two decades, led the investigation. The lawsuit identifies three principal individuals who allegedly orchestrated the scheme, supported by others who distributed stolen credentials and provided tools to enable unauthorised access. The group allegedly used tools such as the “de3u” software and a reverse proxy service to manipulate Microsoft’s generative AI services, including those built on OpenAI’s DALL-E image-generation model.
“Every day, individuals leverage generative AI tools to enhance their creative expression and productivity,” wrote Microsoft DCU assistant general counsel Steven Masada in a company blog post. “Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes. Microsoft recognises the role we play in protecting against the abuse and misuse of our tools as we and others across the sector introduce new capabilities.”
Microsoft’s complaint describes how the defendants used stolen API keys to gain unauthorised access to Azure OpenAI Service, circumventing protective measures designed to prevent misuse. These keys, often obtained through breaches or improper access, allowed the defendants to bypass content safety filters and to generate and distribute harmful material. The tools, including de3u, were then sold to other malicious actors along with detailed usage instructions.
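To see why stolen keys are so potent, consider how the service authenticates requests. The Python sketch below shows the general shape of a key-authenticated call to an Azure OpenAI deployment; the resource name, deployment name and API version are placeholders, and the point is simply that possession of the key is the entire credential.

```python
# Illustration of why a leaked Azure OpenAI key matters: the REST API
# authenticates calls with the key alone, so whoever holds it can invoke
# the deployment as if they were the paying customer.
import requests

ENDPOINT = "https://my-resource.openai.azure.com"  # placeholder resource
DEPLOYMENT = "gpt-4o"                              # placeholder deployment name
API_KEY = "<stolen-or-legitimate-key>"             # the key is the whole credential

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions",
    params={"api-version": "2024-02-01"},          # placeholder API version
    headers={"api-key": API_KEY},                  # no further identity check
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(resp.status_code)
```

Because nothing in the request ties the key to a particular machine or user, the burden falls on key hygiene and on the provider's abuse detection, both of which feature in the complaint.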
Microsoft detected the suspicious activity in July 2024 during a review of irregular API usage. The investigation linked the stolen credentials to customers in the United States, including businesses in Pennsylvania and New Jersey. The defendants are also accused of operating a hacking-as-a-service platform, offering broader access to their tools through sites such as “rentry.org/de3u” and “aitism.net.”
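Microsoft has not described its irregular-usage review in detail, but such reviews typically compare each key's activity against its own history. The sketch below illustrates that idea with a hypothetical usage log and an assumed spike threshold; none of it reflects Microsoft's actual tooling.

```python
# Hedged sketch of irregular-API-usage detection: flag any key whose
# latest daily request volume far exceeds its historical baseline.
# Field names, data and the threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean

# Hypothetical usage log: (api_key_id, day, request_count), sorted by day
usage_log = [
    ("key-A", 1, 120), ("key-A", 2, 130), ("key-A", 3, 5400),
    ("key-B", 1, 90),  ("key-B", 2, 95),  ("key-B", 3, 100),
]

SPIKE_FACTOR = 10  # assumed threshold: 10x the baseline triggers review

def flag_irregular_keys(log):
    by_key = defaultdict(list)
    for key, _, count in log:
        by_key[key].append(count)
    flagged = []
    for key, counts in by_key.items():
        baseline = mean(counts[:-1])   # history before the latest day
        if counts[-1] > SPIKE_FACTOR * baseline:
            flagged.append(key)
    return flagged

print(flag_irregular_keys(usage_log))  # -> ['key-A']
```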
The defendants’ tools allegedly included features that circumvented Microsoft’s safety mechanisms, such as content filtering systems designed to detect and block harmful prompts. The reverse proxy service routed malicious traffic through Cloudflare tunnels, further concealing the origin of unauthorised activity.
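The reverse-proxy technique itself is ordinary infrastructure put to bad use: the upstream service sees requests arriving from the proxy rather than from the true client. A minimal, generic sketch of that forwarding behaviour follows; it is illustrative only and bears no relation to the defendants' actual code.

```python
# Generic reverse-proxy sketch: the upstream service sees only the
# proxy's address, which is how such a setup conceals the real client.
# The upstream URL is a placeholder, not a real endpoint.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "https://api.example.com"  # hypothetical upstream service

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Forward the request; the upstream sees the proxy as the caller.
        req = urllib.request.Request(
            UPSTREAM + self.path, data=body,
            headers={"Content-Type": "application/json"}, method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```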
Microsoft’s response and legal measures
Microsoft claims to have taken swift action upon detecting the breach, revoking compromised credentials and implementing additional safeguards to strengthen its systems against future incidents. The company also seized domains and servers associated with the operation, enabling it to gather evidence and disrupt further misuse.
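Credential revocation of the kind described here usually amounts to maintaining a deny-list that is consulted on every request. A minimal sketch, assuming a hashed in-memory deny-list (a real system would persist and distribute this state):

```python
# Sketch of credential revocation: once a key is known to be compromised,
# adding it to a deny-list blocks all further use. The storage and hashing
# choices here are illustrative assumptions.
import hashlib

revoked_key_hashes: set[str] = set()

def revoke(api_key: str) -> None:
    # Store only a digest so the raw key never sits in the deny-list.
    revoked_key_hashes.add(hashlib.sha256(api_key.encode()).hexdigest())

def is_allowed(api_key: str) -> bool:
    return hashlib.sha256(api_key.encode()).hexdigest() not in revoked_key_hashes

revoke("leaked-key-123")               # hypothetical compromised credential
assert not is_allowed("leaked-key-123")
assert is_allowed("fresh-key-456")
```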
The lawsuit accuses the defendants of violating multiple laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act. Additional claims under Virginia state law include trespass to chattels and tortious interference. Microsoft is seeking damages and injunctive relief to hold the defendants accountable and prevent similar incidents.
The Azure OpenAI Service includes built-in content filtering and abuse detection technologies, which aim to mitigate risks associated with generative AI misuse. These safeguards were among the measures circumvented in the alleged scheme, underscoring the evolving threats faced by platforms offering advanced AI capabilities.
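Conceptually, these filters sit between the caller and the model, classifying each prompt before it is served. Azure's real filters are machine-learning classifiers spanning categories such as hate, violence and self-harm; the sketch below substitutes a trivial keyword check purely to show where the gate sits in the request path.

```python
# Conceptual sketch of content-filter gating. The keyword check is a
# deliberately simplified stand-in for the ML classification step that
# real services perform; all names here are illustrative.
BLOCKED_TERMS = {"example_banned_term"}  # placeholder policy list

def classify(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def call_model(prompt: str) -> str:
    return f"(model response to: {prompt})"  # stub standing in for the model

def handle_request(prompt: str) -> str:
    # The filter gates every request before it reaches the model.
    if classify(prompt):
        return "Request blocked by content policy."
    return call_model(prompt)

print(handle_request("hello"))                           # served normally
print(handle_request("please use example_banned_term"))  # blocked
```

Bypassing this gate, rather than breaking the model itself, is what the complaint alleges the defendants' tooling achieved.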
A report from the Capgemini Research Institute, released in late 2024, found that 97% of surveyed organisations had experienced at least one security breach related to generative AI during the preceding 12 months. The study, which surveyed 1,000 organisations across 13 countries, also highlighted a stark rise in cybersecurity breaches overall: more than 90% of respondents reported at least one breach of any kind in the past year, up from 51% in 2021. Nearly half of the surveyed organisations estimated financial losses exceeding $50m over the last three years, underscoring the growing risks that generative AI technologies pose when exploited by malicious actors.