Baltimore’s mayor and city council filed a lawsuit against Elon Musk’s xAI company on Tuesday, alleging that its Grok chatbot violated consumer protections by generating nonconsensual sexualized images, according to The Guardian. The city argues that xAI deceptively marketed Grok as a general-purpose AI assistant and X as a mainstream social media site, failing to disclose the risks, limitations, and exposure to harm that come with using the platform and chatbot.

City Claims Grok Flooded X With Harmful Content

The lawsuit, filed in the Circuit Court for Baltimore City, argues that the court has jurisdiction over xAI because the company advertises and operates in Baltimore. The city’s complaint states that Grok has flooded the feeds of Baltimore’s X users with nonconsensual intimate imagery (NCII) and child sexual abuse material (CSAM).

The complaint further alleges that Grok exposed Baltimore residents to the risk that any photograph they uploaded—of themselves or of their children—could be ingested by Grok and transformed into sexually degrading deepfakes without their knowledge or consent. xAI did not immediately return a request for comment.

xAI has faced multiple lawsuits and international investigations over its Grok AI product in recent months, following a period when the chatbot generated millions of AI-altered sexualized images earlier this year. According to researchers at the Center for Countering Digital Hate, many of these sexualized images were created using photos of women without their consent.

Grok Produced 23,000 Sexualized Images of Children

The Center for Countering Digital Hate estimated that Grok produced about 23,000 sexualized images of children over an 11-day period in December and January. The group has been investigating the spread of such content across various platforms, highlighting the risks posed by deepfakes.

Baltimore Mayor Brandon Scott said in a statement that the city will not stand by and allow the sexual exploitation of children to continue. He described the situation as a threat to privacy, dignity, and public safety, adding that those responsible must be held accountable.

Musk has denied any knowledge of Grok producing child sexual abuse material, stating in January that he was “not aware of any naked underage images generated by Grok. Literally zero.” The company added restrictions to Grok’s image generation capabilities in early January following backlash and threats of regulatory action from multiple countries.

Baltimore’s case is distinctive in that it alleges violations of city ordinances and consumer protection law, in contrast to other suits brought by individual users claiming personal and reputational harms. The city’s approach could set a precedent for how municipalities hold tech companies accountable for the risks posed by their AI products.

City Sets a Precedent for AI Accountability

Adam Levitt, an attorney representing Baltimore in the case, said the city is setting a powerful example for municipalities nationwide in confronting a novel and rapidly advancing technology. He noted that accountability has not yet caught up with innovation in the field of AI.

In another case against xAI filed earlier this month, three Tennessee teenage girls alleged that Grok used photos of them to create and distribute child sexual abuse material. The class-action lawsuit was the first filed by minors following Grok’s nonconsensual image generation scandal. It also alleged that a third-party app used xAI’s technology to generate fully nude images of the girls, which were then shared online.

Baltimore’s lawsuit highlights growing concerns over the misuse of AI technology and the need for stronger regulations to protect individuals from deepfakes and image manipulation. The case could have significant implications for the future of AI oversight and legal frameworks.

The city’s legal action comes at a time when public trust in AI systems is under scrutiny. As more users rely on AI assistants like Grok for everyday tasks, the potential for misuse and exploitation of personal data is becoming a pressing issue for regulators and lawmakers around the world.