Microsoft has taken legal action against a group the company claims deliberately created and used tools to bypass safety guardrails in its cloud AI products.
According to a complaint filed by the company in December in the US District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by ChatGPT maker OpenAI's technologies.
In the complaint, Microsoft accused the defendants — which it refers to only as "Does," a legal pseudonym — of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering statute by illegally accessing and using Microsoft's software and servers to generate "offensive" and "harmful and objectionable content." Microsoft did not provide specific details about the abusive content that was generated.
The company is seeking injunctive and “other equitable” relief and damages.
In the complaint, Microsoft said it discovered in July 2024 that Azure OpenAI Service credentials, specifically API keys (the unique strings of characters used to authenticate an app or user), were being used to generate content that violates the service's acceptable use policy. Through a subsequent investigation, Microsoft learned that the API keys had been stolen from paying customers, according to the complaint.
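For context, an Azure OpenAI API key works as a bearer credential: whoever holds it can make billable requests against the key owner's resource. The sketch below shows how a key is typically presented in an "api-key" HTTP header; the endpoint, deployment name, and key are placeholders for illustration, not details from the complaint.

```python
import requests

# Hypothetical endpoint and deployment; real values are specific to a
# customer's Azure resource and are not disclosed in the complaint.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "example-deployment"
API_KEY = "<api-key>"  # the kind of credential at issue in the case

# Azure OpenAI authenticates REST calls via the "api-key" header, so a
# stolen key grants the same access as the legitimate customer's.
response = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.status_code)
```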
"The precise means by which the Defendants obtained all of the API Keys used to commit the misconduct described in this Complaint is unknown," Microsoft's complaint reads, "but it appears that the Defendants engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from many Microsoft customers."
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to carry out a “hacking-as-a-service” scheme. According to the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft systems.
De3u allows users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft said. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains a word that triggers Microsoft's content filtering.
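To make the alleged scheme concrete, here is a minimal sketch of the kind of programmatic image-generation call a tool like de3u would automate, written against the official openai Python SDK's Azure client. This illustrates the general API pattern only, not a reconstruction of the defendants' software; the endpoint, deployment name, and key are placeholders.

```python
from openai import AzureOpenAI

# Placeholder credentials; in the alleged scheme, the api_key would be
# one stolen from a paying Azure OpenAI Service customer.
client = AzureOpenAI(
    api_key="<api-key>",
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",
)

# "dall-e-3" is an assumed deployment name for the DALL-E model on the
# customer's Azure resource.
result = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor painting of a lighthouse at dusk",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```

In normal use, the prompt is still subject to Microsoft's server-side content filtering and revision, which is the safeguard the complaint says de3u tried to sidestep.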
A repo containing the de3u project code, hosted on GitHub — a Microsoft-owned company — was no longer accessible at press time.
"These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse measures," the complaint reads. "Defendants knowingly and intentionally accessed computers protected by the Azure OpenAI Service without authorization, and as a result of such conduct caused damage and loss."
In a blog post published Friday, Microsoft said the court has authorized it to seize a website "instrumental" to the defendants' operation, which will allow the company to gather evidence, decipher how the defendants' alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also said it has “put in place countermeasures,” which the company did not specify, and “added additional safety controls” to the Azure OpenAI Service targeting the activity it observed.